Text-to-Image
Diffusers
Safetensors
stable-diffusion
stable-diffusion-diffusers
controlnet
diffusers-training
Instructions to use Amitz244/output_dir_controlnet with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Amitz244/output_dir_controlnet with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```
```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("Amitz244/output_dir_controlnet")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
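The pipeline above also needs a conditioning image at inference time. This card does not document what the "new type of conditioning" is, so the snippet below is only a minimal input-plumbing sketch: it normalizes an arbitrary image to the 512×512 RGB format that `StableDiffusionControlNetPipeline` accepts as its `image` argument, assuming Pillow is available. It is not the checkpoint's real preprocessing.

```python
# Minimal sketch: coerce an arbitrary image into a 512x512 RGB conditioning
# input. The actual conditioning type for this checkpoint is undocumented,
# so treat this as input plumbing only, not the real preprocessing step.
from PIL import Image

def prepare_conditioning(img: Image.Image, size: int = 512) -> Image.Image:
    """Convert to RGB and resize to the pipeline's expected resolution."""
    return img.convert("RGB").resize((size, size))

# Example with a synthetic placeholder image:
demo = Image.new("L", (640, 480), color=128)  # grayscale stand-in
cond = prepare_conditioning(demo)
```

The resulting `cond` image can be passed to the pipeline as `pipe(prompt, image=cond)`.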
controlnet-Amitz244/output_dir_controlnet
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning. You can find some example images below.
prompt: Woman in blue and black on a large plaza.
prompt: A men's restroom showcasing the toilet through an open door.
prompt: A man riding a kiteboard over the ocean under a cloudy sky.
prompt: Two skiers stand on their skis in the snow.
prompt: A meal of cheese toast, spaghetti, and broccoli on a white plate.

Intended uses & limitations
How to use
# TODO: add an example code snippet for running this diffusion pipeline
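Until the official snippet is added, here is a minimal sketch based on the model IDs stated in this card. The `generate` helper name, the `conditioning_path` argument, and the fp16/CUDA settings are illustrative assumptions; only the two checkpoint IDs come from the card itself.

```python
# Hedged sketch: load this ControlNet with its stated base model and run one
# generation. The helper name and settings are assumptions for illustration.
CONTROLNET_ID = "Amitz244/output_dir_controlnet"          # from this card
BASE_MODEL = "stabilityai/stable-diffusion-2-1-base"      # from this card

def generate(prompt, conditioning_path, steps=30):
    """Load the pipeline and generate one image from a conditioning image."""
    # Imports are kept inside the function so the module itself stays light.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(CONTROLNET_ID, torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        BASE_MODEL, controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")
    cond = load_image(conditioning_path)
    return pipe(prompt, image=cond, num_inference_steps=steps).images[0]
```

Calling `generate("Two skiers stand on their skis in the snow.", "cond.png")` downloads both checkpoints and requires a CUDA GPU, so run it only where those resources are available.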
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
[TODO: describe the data used to train the model]
Model tree for Amitz244/output_dir_controlnet
Base model
stabilityai/stable-diffusion-2-1-base