Commit af9b156 (verified) by diffusers-benchmarking-bot · Parent: 9d521f1

Upload folder using huggingface_hub
main/README.md CHANGED
@@ -86,6 +86,7 @@ PIXART-α Controlnet pipeline | Implementation of the controlnet model for pixar
 | Perturbed-Attention Guidance |StableDiffusionPAGPipeline is a modification of StableDiffusionPipeline to support Perturbed-Attention Guidance (PAG).|[Perturbed-Attention Guidance](#perturbed-attention-guidance)|[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/perturbed_attention_guidance.ipynb)|[Hyoungwon Cho](https://github.com/HyoungwonCho)|
 | CogVideoX DDIM Inversion Pipeline | Implementation of DDIM inversion and guided attention-based editing denoising process on CogVideoX. | [CogVideoX DDIM Inversion Pipeline](#cogvideox-ddim-inversion-pipeline) | - | [LittleNyima](https://github.com/LittleNyima) |
 | FaithDiff Stable Diffusion XL Pipeline | Implementation of [(CVPR 2025) FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolution](https://arxiv.org/abs/2411.18824) - FaithDiff is a faithful image super-resolution method that leverages latent diffusion models by actively adapting the diffusion prior and jointly fine-tuning its components (encoder and diffusion model) with an alignment module to ensure high fidelity and structural consistency. | [FaithDiff Stable Diffusion XL Pipeline](#faithdiff-stable-diffusion-xl-pipeline) | [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/jychen9811/FaithDiff) | [Junyang Chen, Jinshan Pan, Jiangxin Dong, IMAG Lab, (Adapted by Eliseu Silva)](https://github.com/JyChen9811/FaithDiff) |
+ | Stable Diffusion 3 InstructPix2Pix Pipeline | Implementation of the Stable Diffusion 3 InstructPix2Pix pipeline for instruction-based image editing | [Stable Diffusion 3 InstructPix2Pix Pipeline](#stable-diffusion-3-instructpix2pix-pipeline) | [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/BleachNick/SD3_UltraEdit_freeform) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/CaptainZZZ/sd3-instructpix2pix) | [Jiayu Zhang](https://github.com/xduzhangjiayu) and [Haozhe Zhao](https://github.com/HaozheZhao)|
 To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
 
 ```py
@@ -5432,4 +5433,50 @@ cropped_image = gen_image.crop((0, 0, width_init, height_init))
 cropped_image.save("data/result.png")
 ````
 ### Result
- [<img src="https://huggingface.co/datasets/DEVAIEXP/assets/resolve/main/faithdiff_restored.PNG" width="512px" height="512px"/>](https://imgsli.com/MzY1NzE2)
+ [<img src="https://huggingface.co/datasets/DEVAIEXP/assets/resolve/main/faithdiff_restored.PNG" width="512px" height="512px"/>](https://imgsli.com/MzY1NzE2)
+
+
+ # Stable Diffusion 3 InstructPix2Pix Pipeline
+ This is an implementation of the Stable Diffusion 3 InstructPix2Pix pipeline, based on Hugging Face Diffusers.
+
+ ## Example Usage
+ This pipeline edits an image according to the user's instruction, using SD3.
+ ````py
+ import torch
+ from diffusers import SD3Transformer2DModel
+ from diffusers import DiffusionPipeline
+ from diffusers.utils import load_image
+
+ resolution = 512
+ image = load_image("https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png").resize(
+     (resolution, resolution)
+ )
+ edit_instruction = "Turn sky into a sunny one"
+
+ # Load the base SD3 weights together with the community InstructPix2Pix pipeline code
+ pipe = DiffusionPipeline.from_pretrained(
+     "stabilityai/stable-diffusion-3-medium-diffusers", custom_pipeline="pipeline_stable_diffusion_3_instruct_pix2pix", torch_dtype=torch.float16
+ ).to("cuda")
+
+ # Swap in the transformer that was fine-tuned for instruction-based editing
+ pipe.transformer = SD3Transformer2DModel.from_pretrained("CaptainZZZ/sd3-instructpix2pix", torch_dtype=torch.float16).to("cuda")
+
+ edited_image = pipe(
+     prompt=edit_instruction,
+     image=image,
+     height=resolution,
+     width=resolution,
+     guidance_scale=7.5,
+     image_guidance_scale=1.5,
+     num_inference_steps=30,
+ ).images[0]
+
+ edited_image.save("edited_image.png")
+ ````
+ |Original|Edited|
+ |---|---|
+ |![Original image](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/StableDiffusion3InstructPix2Pix/mountain.png)|![Edited image](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/StableDiffusion3InstructPix2Pix/edited.png)|
+
+ ### Note
+ This model was trained at 512x512 resolution, so inputs of 512x512 give the best editing results.
+ For better editing performance, please refer to the more powerful model https://huggingface.co/BleachNick/SD3_UltraEdit_freeform and the paper "UltraEdit: Instruction-based Fine-Grained Image Editing at Scale". Many thanks to the authors for their contribution!
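In addition to the arguments shown above, the pipeline's `__call__` (defined in `pipeline_stable_diffusion_3_instruct_pix2pix.py` below) also accepts `negative_prompt`, `num_images_per_prompt`, and a `generator`. A minimal sketch of a seeded, batched edit that reuses the `pipe`, `image`, `resolution`, and `edit_instruction` objects from the example above (the seed and batch size here are illustrative choices, not recommendations from the authors):

```py
import torch

# Fixed seed so the same edit can be reproduced across runs
generator = torch.Generator(device="cuda").manual_seed(0)

edited_images = pipe(
    prompt=edit_instruction,
    image=image,
    height=resolution,
    width=resolution,
    guidance_scale=7.5,        # strength of the text (instruction) guidance
    image_guidance_scale=1.5,  # how closely the result should follow the source image
    num_inference_steps=30,
    num_images_per_prompt=2,   # produce two candidate edits for the same instruction
    generator=generator,
).images

for i, img in enumerate(edited_images):
    img.save(f"edited_image_{i}.png")
```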
main/pipeline_stable_diffusion_3_instruct_pix2pix.py ADDED
@@ -0,0 +1,1266 @@
1
+ # Copyright 2024 Stability AI, The HuggingFace Team and The InstantX Team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ import inspect
16
+ from typing import Any, Callable, Dict, List, Optional, Union
17
+
18
+ import PIL.Image
19
+ import torch
20
+ from transformers import (
21
+ CLIPTextModelWithProjection,
22
+ CLIPTokenizer,
23
+ SiglipImageProcessor,
24
+ SiglipVisionModel,
25
+ T5EncoderModel,
26
+ T5TokenizerFast,
27
+ )
28
+
29
+ from ...image_processor import PipelineImageInput, VaeImageProcessor
30
+ from ...loaders import FromSingleFileMixin, SD3IPAdapterMixin, SD3LoraLoaderMixin
31
+ from ...models.autoencoders import AutoencoderKL
32
+ from ...models.transformers import SD3Transformer2DModel
33
+ from ...schedulers import FlowMatchEulerDiscreteScheduler
34
+ from ...utils import (
35
+ USE_PEFT_BACKEND,
36
+ deprecate,
37
+ is_torch_xla_available,
38
+ logging,
39
+ replace_example_docstring,
40
+ scale_lora_layers,
41
+ unscale_lora_layers,
42
+ )
43
+ from ...utils.torch_utils import randn_tensor
44
+ from ..pipeline_utils import DiffusionPipeline
45
+ from .pipeline_output import StableDiffusion3PipelineOutput
46
+
47
+
48
+ if is_torch_xla_available():
49
+ import torch_xla.core.xla_model as xm
50
+
51
+ XLA_AVAILABLE = True
52
+ else:
53
+ XLA_AVAILABLE = False
54
+
55
+
56
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
57
+
58
+ EXAMPLE_DOC_STRING = """
59
+ Examples:
60
+ ```py
61
+ >>> import torch
62
+ >>> from diffusers import StableDiffusion3InstructPix2PixPipeline
63
+ >>> from diffusers.utils import load_image
64
+
65
+ >>> resolution = 1024
66
+ >>> image = load_image(
67
+ ... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
68
+ ... ).resize((resolution, resolution))
69
+ >>> edit_instruction = "Turn sky into a cloudy one"
70
+
71
+ >>> pipe = StableDiffusion3InstructPix2PixPipeline.from_pretrained(
72
+ ... "your_own_model_path", torch_dtype=torch.float16
73
+ ... ).to("cuda")
74
+
75
+ >>> edited_image = pipe(
76
+ ... prompt=edit_instruction,
77
+ ... image=image,
78
+ ... height=resolution,
79
+ ... width=resolution,
80
+ ... guidance_scale=7.5,
81
+ ... image_guidance_scale=1.5,
82
+ ... num_inference_steps=30,
83
+ ... ).images[0]
84
+ >>> edited_image
85
+ ```
86
+ """
87
+
88
+
89
+ # Copied from diffusers.pipelines.flux.pipeline_flux.calculate_shift
90
+ def calculate_shift(
91
+ image_seq_len,
92
+ base_seq_len: int = 256,
93
+ max_seq_len: int = 4096,
94
+ base_shift: float = 0.5,
95
+ max_shift: float = 1.15,
96
+ ):
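+ # Linearly interpolates the flow-matching timestep shift `mu` between `base_shift` and `max_shift` as a function of the image token sequence length.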
97
+ m = (max_shift - base_shift) / (max_seq_len - base_seq_len)
98
+ b = base_shift - m * base_seq_len
99
+ mu = image_seq_len * m + b
100
+ return mu
101
+
102
+
103
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.retrieve_latents
104
+ def retrieve_latents(
105
+ encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample"
106
+ ):
107
+ if hasattr(encoder_output, "latent_dist") and sample_mode == "sample":
108
+ return encoder_output.latent_dist.sample(generator)
109
+ elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax":
110
+ return encoder_output.latent_dist.mode()
111
+ elif hasattr(encoder_output, "latents"):
112
+ return encoder_output.latents
113
+ else:
114
+ raise AttributeError("Could not access latents of provided encoder_output")
115
+
116
+
117
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps
118
+ def retrieve_timesteps(
119
+ scheduler,
120
+ num_inference_steps: Optional[int] = None,
121
+ device: Optional[Union[str, torch.device]] = None,
122
+ timesteps: Optional[List[int]] = None,
123
+ sigmas: Optional[List[float]] = None,
124
+ **kwargs,
125
+ ):
126
+ r"""
127
+ Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles
128
+ custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`.
129
+
130
+ Args:
131
+ scheduler (`SchedulerMixin`):
132
+ The scheduler to get timesteps from.
133
+ num_inference_steps (`int`):
134
+ The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps`
135
+ must be `None`.
136
+ device (`str` or `torch.device`, *optional*):
137
+ The device to which the timesteps should be moved to. If `None`, the timesteps are not moved.
138
+ timesteps (`List[int]`, *optional*):
139
+ Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed,
140
+ `num_inference_steps` and `sigmas` must be `None`.
141
+ sigmas (`List[float]`, *optional*):
142
+ Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed,
143
+ `num_inference_steps` and `timesteps` must be `None`.
144
+
145
+ Returns:
146
+ `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the
147
+ second element is the number of inference steps.
148
+ """
149
+ if timesteps is not None and sigmas is not None:
150
+ raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values")
151
+ if timesteps is not None:
152
+ accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
153
+ if not accepts_timesteps:
154
+ raise ValueError(
155
+ f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
156
+ f" timestep schedules. Please check whether you are using the correct scheduler."
157
+ )
158
+ scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs)
159
+ timesteps = scheduler.timesteps
160
+ num_inference_steps = len(timesteps)
161
+ elif sigmas is not None:
162
+ accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys())
163
+ if not accept_sigmas:
164
+ raise ValueError(
165
+ f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom"
166
+ f" sigmas schedules. Please check whether you are using the correct scheduler."
167
+ )
168
+ scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs)
169
+ timesteps = scheduler.timesteps
170
+ num_inference_steps = len(timesteps)
171
+ else:
172
+ scheduler.set_timesteps(num_inference_steps, device=device, **kwargs)
173
+ timesteps = scheduler.timesteps
174
+ return timesteps, num_inference_steps
175
+
176
+
177
+ class StableDiffusion3InstructPix2PixPipeline(
178
+ DiffusionPipeline, SD3LoraLoaderMixin, FromSingleFileMixin, SD3IPAdapterMixin
179
+ ):
180
+ r"""
181
+ Args:
182
+ transformer ([`SD3Transformer2DModel`]):
183
+ Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
184
+ scheduler ([`FlowMatchEulerDiscreteScheduler`]):
185
+ A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
186
+ vae ([`AutoencoderKL`]):
187
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
188
+ text_encoder ([`CLIPTextModelWithProjection`]):
189
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
190
+ specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant,
191
+ with an additional added projection layer that is initialized with a diagonal matrix with the `hidden_size`
192
+ as its dimension.
193
+ text_encoder_2 ([`CLIPTextModelWithProjection`]):
194
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
195
+ specifically the
196
+ [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
197
+ variant.
198
+ text_encoder_3 ([`T5EncoderModel`]):
199
+ Frozen text-encoder. Stable Diffusion 3 uses
200
+ [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the
201
+ [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant.
202
+ tokenizer (`CLIPTokenizer`):
203
+ Tokenizer of class
204
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
205
+ tokenizer_2 (`CLIPTokenizer`):
206
+ Second Tokenizer of class
207
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
208
+ tokenizer_3 (`T5TokenizerFast`):
209
+ Tokenizer of class
210
+ [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
211
+ image_encoder (`SiglipVisionModel`, *optional*):
212
+ Pre-trained Vision Model for IP Adapter.
213
+ feature_extractor (`SiglipImageProcessor`, *optional*):
214
+ Image processor for IP Adapter.
215
+ """
216
+
217
+ model_cpu_offload_seq = "text_encoder->text_encoder_2->text_encoder_3->image_encoder->transformer->vae"
218
+ _optional_components = ["image_encoder", "feature_extractor"]
219
+ _callback_tensor_inputs = ["latents", "prompt_embeds", "negative_prompt_embeds", "negative_pooled_prompt_embeds"]
220
+
221
+ def __init__(
222
+ self,
223
+ transformer: SD3Transformer2DModel,
224
+ scheduler: FlowMatchEulerDiscreteScheduler,
225
+ vae: AutoencoderKL,
226
+ text_encoder: CLIPTextModelWithProjection,
227
+ tokenizer: CLIPTokenizer,
228
+ text_encoder_2: CLIPTextModelWithProjection,
229
+ tokenizer_2: CLIPTokenizer,
230
+ text_encoder_3: T5EncoderModel,
231
+ tokenizer_3: T5TokenizerFast,
232
+ image_encoder: SiglipVisionModel = None,
233
+ feature_extractor: SiglipImageProcessor = None,
234
+ ):
235
+ super().__init__()
236
+
237
+ self.register_modules(
238
+ vae=vae,
239
+ text_encoder=text_encoder,
240
+ text_encoder_2=text_encoder_2,
241
+ text_encoder_3=text_encoder_3,
242
+ tokenizer=tokenizer,
243
+ tokenizer_2=tokenizer_2,
244
+ tokenizer_3=tokenizer_3,
245
+ transformer=transformer,
246
+ scheduler=scheduler,
247
+ image_encoder=image_encoder,
248
+ feature_extractor=feature_extractor,
249
+ )
250
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8
251
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
252
+ self.tokenizer_max_length = (
253
+ self.tokenizer.model_max_length if hasattr(self, "tokenizer") and self.tokenizer is not None else 77
254
+ )
255
+ self.default_sample_size = (
256
+ self.transformer.config.sample_size
257
+ if hasattr(self, "transformer") and self.transformer is not None
258
+ else 128
259
+ )
260
+ self.patch_size = (
261
+ self.transformer.config.patch_size if hasattr(self, "transformer") and self.transformer is not None else 2
262
+ )
263
+
264
+ def _get_t5_prompt_embeds(
265
+ self,
266
+ prompt: Union[str, List[str]] = None,
267
+ num_images_per_prompt: int = 1,
268
+ max_sequence_length: int = 256,
269
+ device: Optional[torch.device] = None,
270
+ dtype: Optional[torch.dtype] = None,
271
+ ):
272
+ device = device or self._execution_device
273
+ dtype = dtype or self.text_encoder.dtype
274
+
275
+ prompt = [prompt] if isinstance(prompt, str) else prompt
276
+ batch_size = len(prompt)
277
+
278
+ if self.text_encoder_3 is None:
279
+ return torch.zeros(
280
+ (
281
+ batch_size * num_images_per_prompt,
282
+ self.tokenizer_max_length,
283
+ self.transformer.config.joint_attention_dim,
284
+ ),
285
+ device=device,
286
+ dtype=dtype,
287
+ )
288
+
289
+ text_inputs = self.tokenizer_3(
290
+ prompt,
291
+ padding="max_length",
292
+ max_length=max_sequence_length,
293
+ truncation=True,
294
+ add_special_tokens=True,
295
+ return_tensors="pt",
296
+ )
297
+ text_input_ids = text_inputs.input_ids
298
+ untruncated_ids = self.tokenizer_3(prompt, padding="longest", return_tensors="pt").input_ids
299
+
300
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
301
+ removed_text = self.tokenizer_3.batch_decode(untruncated_ids[:, self.tokenizer_max_length - 1 : -1])
302
+ logger.warning(
303
+ "The following part of your input was truncated because `max_sequence_length` is set to "
304
+ f" {max_sequence_length} tokens: {removed_text}"
305
+ )
306
+
307
+ prompt_embeds = self.text_encoder_3(text_input_ids.to(device))[0]
308
+
309
+ dtype = self.text_encoder_3.dtype
310
+ prompt_embeds = prompt_embeds.to(dtype=dtype, device=device)
311
+
312
+ _, seq_len, _ = prompt_embeds.shape
313
+
314
+ # duplicate text embeddings and attention mask for each generation per prompt, using mps friendly method
315
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
316
+ prompt_embeds = prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
317
+
318
+ return prompt_embeds
319
+
320
+ def _get_clip_prompt_embeds(
321
+ self,
322
+ prompt: Union[str, List[str]],
323
+ num_images_per_prompt: int = 1,
324
+ device: Optional[torch.device] = None,
325
+ clip_skip: Optional[int] = None,
326
+ clip_model_index: int = 0,
327
+ ):
328
+ device = device or self._execution_device
329
+
330
+ clip_tokenizers = [self.tokenizer, self.tokenizer_2]
331
+ clip_text_encoders = [self.text_encoder, self.text_encoder_2]
332
+
333
+ tokenizer = clip_tokenizers[clip_model_index]
334
+ text_encoder = clip_text_encoders[clip_model_index]
335
+
336
+ prompt = [prompt] if isinstance(prompt, str) else prompt
337
+ batch_size = len(prompt)
338
+
339
+ text_inputs = tokenizer(
340
+ prompt,
341
+ padding="max_length",
342
+ max_length=self.tokenizer_max_length,
343
+ truncation=True,
344
+ return_tensors="pt",
345
+ )
346
+
347
+ text_input_ids = text_inputs.input_ids
348
+ untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
349
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
350
+ removed_text = tokenizer.batch_decode(untruncated_ids[:, self.tokenizer_max_length - 1 : -1])
351
+ logger.warning(
352
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
353
+ f" {self.tokenizer_max_length} tokens: {removed_text}"
354
+ )
355
+ prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
356
+ pooled_prompt_embeds = prompt_embeds[0]
357
+
358
+ if clip_skip is None:
359
+ prompt_embeds = prompt_embeds.hidden_states[-2]
360
+ else:
361
+ prompt_embeds = prompt_embeds.hidden_states[-(clip_skip + 2)]
362
+
363
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
364
+
365
+ _, seq_len, _ = prompt_embeds.shape
366
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
367
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
368
+ prompt_embeds = prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
369
+
370
+ pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt, 1)
371
+ pooled_prompt_embeds = pooled_prompt_embeds.view(batch_size * num_images_per_prompt, -1)
372
+
373
+ return prompt_embeds, pooled_prompt_embeds
374
+
375
+ def encode_prompt(
376
+ self,
377
+ prompt: Union[str, List[str]],
378
+ prompt_2: Union[str, List[str]],
379
+ prompt_3: Union[str, List[str]],
380
+ device: Optional[torch.device] = None,
381
+ num_images_per_prompt: int = 1,
382
+ do_classifier_free_guidance: bool = True,
383
+ negative_prompt: Optional[Union[str, List[str]]] = None,
384
+ negative_prompt_2: Optional[Union[str, List[str]]] = None,
385
+ negative_prompt_3: Optional[Union[str, List[str]]] = None,
386
+ prompt_embeds: Optional[torch.FloatTensor] = None,
387
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
388
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
389
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
390
+ clip_skip: Optional[int] = None,
391
+ max_sequence_length: int = 256,
392
+ lora_scale: Optional[float] = None,
393
+ ):
394
+ r"""
395
+
396
+ Args:
397
+ prompt (`str` or `List[str]`, *optional*):
398
+ prompt to be encoded
399
+ prompt_2 (`str` or `List[str]`, *optional*):
400
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
401
+ used in all text-encoders
402
+ prompt_3 (`str` or `List[str]`, *optional*):
403
+ The prompt or prompts to be sent to the `tokenizer_3` and `text_encoder_3`. If not defined, `prompt` is
404
+ used in all text-encoders
405
+ device: (`torch.device`):
406
+ torch device
407
+ num_images_per_prompt (`int`):
408
+ number of images that should be generated per prompt
409
+ do_classifier_free_guidance (`bool`):
410
+ whether to use classifier free guidance or not
411
+ negative_prompt (`str` or `List[str]`, *optional*):
412
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
413
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
414
+ less than `1`).
415
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
416
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
417
+ `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders.
418
+ negative_prompt_3 (`str` or `List[str]`, *optional*):
419
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
420
+ `text_encoder_3`. If not defined, `negative_prompt` is used in all the text-encoders.
421
+ prompt_embeds (`torch.FloatTensor`, *optional*):
422
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
423
+ provided, text embeddings will be generated from `prompt` input argument.
424
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
425
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
426
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
427
+ argument.
428
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
429
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
430
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
431
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
432
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
433
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
434
+ input argument.
435
+ clip_skip (`int`, *optional*):
436
+ Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
437
+ the output of the pre-final layer will be used for computing the prompt embeddings.
438
+ lora_scale (`float`, *optional*):
439
+ A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
440
+ """
441
+ device = device or self._execution_device
442
+
443
+ # set lora scale so that monkey patched LoRA
444
+ # function of text encoder can correctly access it
445
+ if lora_scale is not None and isinstance(self, SD3LoraLoaderMixin):
446
+ self._lora_scale = lora_scale
447
+
448
+ # dynamically adjust the LoRA scale
449
+ if self.text_encoder is not None and USE_PEFT_BACKEND:
450
+ scale_lora_layers(self.text_encoder, lora_scale)
451
+ if self.text_encoder_2 is not None and USE_PEFT_BACKEND:
452
+ scale_lora_layers(self.text_encoder_2, lora_scale)
453
+
454
+ prompt = [prompt] if isinstance(prompt, str) else prompt
455
+ if prompt is not None:
456
+ batch_size = len(prompt)
457
+ else:
458
+ batch_size = prompt_embeds.shape[0]
459
+
460
+ if prompt_embeds is None:
461
+ prompt_2 = prompt_2 or prompt
462
+ prompt_2 = [prompt_2] if isinstance(prompt_2, str) else prompt_2
463
+
464
+ prompt_3 = prompt_3 or prompt
465
+ prompt_3 = [prompt_3] if isinstance(prompt_3, str) else prompt_3
466
+
467
+ prompt_embed, pooled_prompt_embed = self._get_clip_prompt_embeds(
468
+ prompt=prompt,
469
+ device=device,
470
+ num_images_per_prompt=num_images_per_prompt,
471
+ clip_skip=clip_skip,
472
+ clip_model_index=0,
473
+ )
474
+ prompt_2_embed, pooled_prompt_2_embed = self._get_clip_prompt_embeds(
475
+ prompt=prompt_2,
476
+ device=device,
477
+ num_images_per_prompt=num_images_per_prompt,
478
+ clip_skip=clip_skip,
479
+ clip_model_index=1,
480
+ )
481
+ clip_prompt_embeds = torch.cat([prompt_embed, prompt_2_embed], dim=-1)
482
+
483
+ t5_prompt_embed = self._get_t5_prompt_embeds(
484
+ prompt=prompt_3,
485
+ num_images_per_prompt=num_images_per_prompt,
486
+ max_sequence_length=max_sequence_length,
487
+ device=device,
488
+ )
489
+
490
+ clip_prompt_embeds = torch.nn.functional.pad(
491
+ clip_prompt_embeds, (0, t5_prompt_embed.shape[-1] - clip_prompt_embeds.shape[-1])
492
+ )
493
+
494
+ prompt_embeds = torch.cat([clip_prompt_embeds, t5_prompt_embed], dim=-2)
495
+ pooled_prompt_embeds = torch.cat([pooled_prompt_embed, pooled_prompt_2_embed], dim=-1)
496
+
497
+ if do_classifier_free_guidance and negative_prompt_embeds is None:
498
+ negative_prompt = negative_prompt or ""
499
+ negative_prompt_2 = negative_prompt_2 or negative_prompt
500
+ negative_prompt_3 = negative_prompt_3 or negative_prompt
501
+
502
+ # normalize str to list
503
+ negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt
504
+ negative_prompt_2 = (
505
+ batch_size * [negative_prompt_2] if isinstance(negative_prompt_2, str) else negative_prompt_2
506
+ )
507
+ negative_prompt_3 = (
508
+ batch_size * [negative_prompt_3] if isinstance(negative_prompt_3, str) else negative_prompt_3
509
+ )
510
+
511
+ if prompt is not None and type(prompt) is not type(negative_prompt):
512
+ raise TypeError(
513
+ f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
514
+ f" {type(prompt)}."
515
+ )
516
+ elif batch_size != len(negative_prompt):
517
+ raise ValueError(
518
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
519
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
520
+ " the batch size of `prompt`."
521
+ )
522
+
523
+ negative_prompt_embed, negative_pooled_prompt_embed = self._get_clip_prompt_embeds(
524
+ negative_prompt,
525
+ device=device,
526
+ num_images_per_prompt=num_images_per_prompt,
527
+ clip_skip=None,
528
+ clip_model_index=0,
529
+ )
530
+ negative_prompt_2_embed, negative_pooled_prompt_2_embed = self._get_clip_prompt_embeds(
531
+ negative_prompt_2,
532
+ device=device,
533
+ num_images_per_prompt=num_images_per_prompt,
534
+ clip_skip=None,
535
+ clip_model_index=1,
536
+ )
537
+ negative_clip_prompt_embeds = torch.cat([negative_prompt_embed, negative_prompt_2_embed], dim=-1)
538
+
539
+ t5_negative_prompt_embed = self._get_t5_prompt_embeds(
540
+ prompt=negative_prompt_3,
541
+ num_images_per_prompt=num_images_per_prompt,
542
+ max_sequence_length=max_sequence_length,
543
+ device=device,
544
+ )
545
+
546
+ negative_clip_prompt_embeds = torch.nn.functional.pad(
547
+ negative_clip_prompt_embeds,
548
+ (0, t5_negative_prompt_embed.shape[-1] - negative_clip_prompt_embeds.shape[-1]),
549
+ )
550
+
551
+ negative_prompt_embeds = torch.cat([negative_clip_prompt_embeds, t5_negative_prompt_embed], dim=-2)
552
+ negative_pooled_prompt_embeds = torch.cat(
553
+ [negative_pooled_prompt_embed, negative_pooled_prompt_2_embed], dim=-1
554
+ )
555
+
556
+ if self.text_encoder is not None:
557
+ if isinstance(self, SD3LoraLoaderMixin) and USE_PEFT_BACKEND:
558
+ # Retrieve the original scale by scaling back the LoRA layers
559
+ unscale_lora_layers(self.text_encoder, lora_scale)
560
+
561
+ if self.text_encoder_2 is not None:
562
+ if isinstance(self, SD3LoraLoaderMixin) and USE_PEFT_BACKEND:
563
+ # Retrieve the original scale by scaling back the LoRA layers
564
+ unscale_lora_layers(self.text_encoder_2, lora_scale)
565
+
566
+ return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
567
+
568
+ def check_inputs(
569
+ self,
570
+ prompt,
571
+ prompt_2,
572
+ prompt_3,
573
+ height,
574
+ width,
575
+ negative_prompt=None,
576
+ negative_prompt_2=None,
577
+ negative_prompt_3=None,
578
+ prompt_embeds=None,
579
+ negative_prompt_embeds=None,
580
+ pooled_prompt_embeds=None,
581
+ negative_pooled_prompt_embeds=None,
582
+ callback_on_step_end_tensor_inputs=None,
583
+ max_sequence_length=None,
584
+ ):
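+ # With the SD3 defaults (vae_scale_factor = 8, patch_size = 2), this requires height and width to be multiples of 16.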
585
+ if (
586
+ height % (self.vae_scale_factor * self.patch_size) != 0
587
+ or width % (self.vae_scale_factor * self.patch_size) != 0
588
+ ):
589
+ raise ValueError(
590
+ f"`height` and `width` have to be divisible by {self.vae_scale_factor * self.patch_size} but are {height} and {width}."
591
+ f"You can use height {height - height % (self.vae_scale_factor * self.patch_size)} and width {width - width % (self.vae_scale_factor * self.patch_size)}."
592
+ )
593
+
594
+ if callback_on_step_end_tensor_inputs is not None and not all(
595
+ k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs
596
+ ):
597
+ raise ValueError(
598
+ f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}"
599
+ )
600
+
601
+ if prompt is not None and prompt_embeds is not None:
602
+ raise ValueError(
603
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
604
+ " only forward one of the two."
605
+ )
606
+ elif prompt_2 is not None and prompt_embeds is not None:
607
+ raise ValueError(
608
+ f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
609
+ " only forward one of the two."
610
+ )
611
+ elif prompt_3 is not None and prompt_embeds is not None:
612
+ raise ValueError(
613
+ f"Cannot forward both `prompt_3`: {prompt_3} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
614
+ " only forward one of the two."
615
+ )
616
+ elif prompt is None and prompt_embeds is None:
617
+ raise ValueError(
618
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
619
+ )
620
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
621
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
622
+ elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
623
+ raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
624
+ elif prompt_3 is not None and (not isinstance(prompt_3, str) and not isinstance(prompt_3, list)):
625
+ raise ValueError(f"`prompt_3` has to be of type `str` or `list` but is {type(prompt_3)}")
626
+
627
+ if negative_prompt is not None and negative_prompt_embeds is not None:
628
+ raise ValueError(
629
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
630
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
631
+ )
632
+ elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
633
+ raise ValueError(
634
+ f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
635
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
636
+ )
637
+ elif negative_prompt_3 is not None and negative_prompt_embeds is not None:
638
+ raise ValueError(
639
+ f"Cannot forward both `negative_prompt_3`: {negative_prompt_3} and `negative_prompt_embeds`:"
640
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
641
+ )
642
+
643
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
644
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
645
+ raise ValueError(
646
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
647
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
648
+ f" {negative_prompt_embeds.shape}."
649
+ )
650
+
651
+ if prompt_embeds is not None and pooled_prompt_embeds is None:
652
+ raise ValueError(
653
+ "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
654
+ )
655
+
656
+ if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
657
+ raise ValueError(
658
+ "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
659
+ )
660
+
661
+ if max_sequence_length is not None and max_sequence_length > 512:
662
+ raise ValueError(f"`max_sequence_length` cannot be greater than 512 but is {max_sequence_length}")
663
+
664
+ def prepare_latents(
665
+ self,
666
+ batch_size,
667
+ num_channels_latents,
668
+ height,
669
+ width,
670
+ dtype,
671
+ device,
672
+ generator,
673
+ latents=None,
674
+ ):
675
+ if latents is not None:
676
+ return latents.to(device=device, dtype=dtype)
677
+
678
+ shape = (
679
+ batch_size,
680
+ num_channels_latents,
681
+ int(height) // self.vae_scale_factor,
682
+ int(width) // self.vae_scale_factor,
683
+ )
684
+
685
+ if isinstance(generator, list) and len(generator) != batch_size:
686
+ raise ValueError(
687
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
688
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
689
+ )
690
+
691
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
692
+
693
+ return latents
694
+
695
+ def prepare_image_latents(
696
+ self,
697
+ image,
698
+ batch_size,
699
+ num_images_per_prompt,
700
+ dtype,
701
+ device,
702
+ generator,
703
+ do_classifier_free_guidance,
704
+ ):
705
+ if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
706
+ raise ValueError(
707
+ f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
708
+ )
709
+
710
+ image = image.to(device=device, dtype=dtype)
711
+
712
+ batch_size = batch_size * num_images_per_prompt
713
+
714
+ if image.shape[1] == self.vae.config.latent_channels:
715
+ image_latents = image
716
+ else:
717
+ image_latents = retrieve_latents(self.vae.encode(image), sample_mode="argmax", generator=generator)
718
+
719
+ image_latents = (image_latents - self.vae.config.shift_factor) * self.vae.config.scaling_factor
720
+
721
+ if batch_size > image_latents.shape[0] and batch_size % image_latents.shape[0] == 0:
722
+ # expand image_latents for batch_size
723
+ deprecation_message = (
724
+ f"You have passed {batch_size} text prompts (`prompt`), but only {image_latents.shape[0]} initial"
725
+ " images (`image`). Initial images are now duplicating to match the number of text prompts. Note"
726
+ " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update"
727
+ " your script to pass as many initial images as text prompts to suppress this warning."
728
+ )
729
+ deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False)
730
+ additional_image_per_prompt = batch_size // image_latents.shape[0]
731
+ image_latents = torch.cat([image_latents] * additional_image_per_prompt, dim=0)
732
+ elif batch_size > image_latents.shape[0] and batch_size % image_latents.shape[0] != 0:
733
+ raise ValueError(
734
+ f"Cannot duplicate `image` of batch size {image_latents.shape[0]} to {batch_size} text prompts."
735
+ )
736
+ else:
737
+ image_latents = torch.cat([image_latents], dim=0)
738
+
739
+ if do_classifier_free_guidance:
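+ # The image latents are duplicated for the two image-conditioned guidance branches, and zeroed latents form the fully unconditional branch (InstructPix2Pix-style three-way classifier-free guidance).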
740
+ uncond_image_latents = torch.zeros_like(image_latents)
741
+ image_latents = torch.cat([image_latents, image_latents, uncond_image_latents], dim=0)
742
+
743
+ return image_latents
744
+
745
+ @property
746
+ def guidance_scale(self):
747
+ return self._guidance_scale
748
+
749
+ @property
750
+ def image_guidance_scale(self):
751
+ return self._image_guidance_scale
752
+
753
+ @property
754
+ def skip_guidance_layers(self):
755
+ return self._skip_guidance_layers
756
+
757
+ @property
758
+ def clip_skip(self):
759
+ return self._clip_skip
760
+
761
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
762
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
763
+ # corresponds to doing no classifier free guidance.
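+ # For this pipeline, classifier-free guidance is only enabled when `guidance_scale > 1` and `image_guidance_scale >= 1`.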
764
+ @property
765
+ def do_classifier_free_guidance(self):
766
+ return self._guidance_scale > 1.0 and self.image_guidance_scale >= 1.0
767
+
768
+ @property
769
+ def joint_attention_kwargs(self):
770
+ return self._joint_attention_kwargs
771
+
772
+ @property
773
+ def num_timesteps(self):
774
+ return self._num_timesteps
775
+
776
+ @property
777
+ def interrupt(self):
778
+ return self._interrupt
779
+
780
+ # Adapted from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline.encode_image
781
+ def encode_image(self, image: PipelineImageInput, device: torch.device) -> torch.Tensor:
782
+ """Encodes the given image into a feature representation using a pre-trained image encoder.
783
+
784
+ Args:
785
+ image (`PipelineImageInput`):
786
+ Input image to be encoded.
787
+ device: (`torch.device`):
788
+ Torch device.
789
+
790
+ Returns:
791
+ `torch.Tensor`: The encoded image feature representation.
792
+ """
793
+ if not isinstance(image, torch.Tensor):
794
+ image = self.feature_extractor(image, return_tensors="pt").pixel_values
795
+
796
+ image = image.to(device=device, dtype=self.dtype)
797
+
798
+ return self.image_encoder(image, output_hidden_states=True).hidden_states[-2]
799
+
800
+ # Adapted from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline.prepare_ip_adapter_image_embeds
801
+ def prepare_ip_adapter_image_embeds(
802
+ self,
803
+ ip_adapter_image: Optional[PipelineImageInput] = None,
804
+ ip_adapter_image_embeds: Optional[torch.Tensor] = None,
805
+ device: Optional[torch.device] = None,
806
+ num_images_per_prompt: int = 1,
807
+ do_classifier_free_guidance: bool = True,
808
+ ) -> torch.Tensor:
809
+ """Prepares image embeddings for use in the IP-Adapter.
810
+
811
+ Either `ip_adapter_image` or `ip_adapter_image_embeds` must be passed.
812
+
813
+ Args:
814
+ ip_adapter_image (`PipelineImageInput`, *optional*):
815
+ The input image to extract features from for IP-Adapter.
816
+ ip_adapter_image_embeds (`torch.Tensor`, *optional*):
817
+ Precomputed image embeddings.
818
+ device: (`torch.device`, *optional*):
819
+ Torch device.
820
+ num_images_per_prompt (`int`, defaults to 1):
821
+ Number of images that should be generated per prompt.
822
+ do_classifier_free_guidance (`bool`, defaults to True):
823
+ Whether to use classifier free guidance or not.
824
+ """
825
+ device = device or self._execution_device
826
+
827
+ if ip_adapter_image_embeds is not None:
828
+ if do_classifier_free_guidance:
829
+ single_negative_image_embeds, single_image_embeds = ip_adapter_image_embeds.chunk(2)
830
+ else:
831
+ single_image_embeds = ip_adapter_image_embeds
832
+ elif ip_adapter_image is not None:
833
+ single_image_embeds = self.encode_image(ip_adapter_image, device)
834
+ if do_classifier_free_guidance:
835
+ single_negative_image_embeds = torch.zeros_like(single_image_embeds)
836
+ else:
837
+ raise ValueError("Neither `ip_adapter_image` nor `ip_adapter_image_embeds` was provided.")
838
+
839
+ image_embeds = torch.cat([single_image_embeds] * num_images_per_prompt, dim=0)
840
+
841
+ if do_classifier_free_guidance:
842
+ negative_image_embeds = torch.cat([single_negative_image_embeds] * num_images_per_prompt, dim=0)
843
+ image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0)
844
+
845
+ return image_embeds.to(device=device)
846
+
847
+ def enable_sequential_cpu_offload(self, *args, **kwargs):
848
+ if self.image_encoder is not None and "image_encoder" not in self._exclude_from_cpu_offload:
849
+ logger.warning(
850
+ "`pipe.enable_sequential_cpu_offload()` might fail for `image_encoder` if it uses "
851
+ "`torch.nn.MultiheadAttention`. You can exclude `image_encoder` from CPU offloading by calling "
852
+ "`pipe._exclude_from_cpu_offload.append('image_encoder')` before `pipe.enable_sequential_cpu_offload()`."
853
+ )
854
+
855
+ super().enable_sequential_cpu_offload(*args, **kwargs)
856
+
857
+ @torch.no_grad()
858
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
859
+ def __call__(
860
+ self,
861
+ prompt: Union[str, List[str]] = None,
862
+ prompt_2: Optional[Union[str, List[str]]] = None,
863
+ prompt_3: Optional[Union[str, List[str]]] = None,
864
+ image: PipelineImageInput = None,
865
+ height: Optional[int] = None,
866
+ width: Optional[int] = None,
867
+ num_inference_steps: int = 28,
868
+ sigmas: Optional[List[float]] = None,
869
+ guidance_scale: float = 7.0,
870
+ image_guidance_scale: float = 1.5,
871
+ negative_prompt: Optional[Union[str, List[str]]] = None,
872
+ negative_prompt_2: Optional[Union[str, List[str]]] = None,
873
+ negative_prompt_3: Optional[Union[str, List[str]]] = None,
874
+ num_images_per_prompt: Optional[int] = 1,
875
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
876
+ latents: Optional[torch.FloatTensor] = None,
877
+ prompt_embeds: Optional[torch.FloatTensor] = None,
878
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
879
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
880
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
881
+ ip_adapter_image: Optional[PipelineImageInput] = None,
882
+ ip_adapter_image_embeds: Optional[torch.Tensor] = None,
883
+ output_type: Optional[str] = "pil",
884
+ return_dict: bool = True,
885
+ joint_attention_kwargs: Optional[Dict[str, Any]] = None,
886
+ clip_skip: Optional[int] = None,
887
+ callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
888
+ callback_on_step_end_tensor_inputs: List[str] = ["latents"],
889
+ max_sequence_length: int = 256,
890
+ skip_guidance_layers: List[int] = None,
891
+ skip_layer_guidance_scale: float = 2.8,
892
+ skip_layer_guidance_stop: float = 0.2,
893
+ skip_layer_guidance_start: float = 0.01,
894
+ mu: Optional[float] = None,
895
+ ):
896
+ r"""
897
+ Function invoked when calling the pipeline for generation.
898
+
899
+ Args:
900
+ prompt (`str` or `List[str]`, *optional*):
901
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
902
+ instead.
903
+ prompt_2 (`str` or `List[str]`, *optional*):
904
+ The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt`
905
+ will be used instead
906
+ prompt_3 (`str` or `List[str]`, *optional*):
907
+ The prompt or prompts to be sent to `tokenizer_3` and `text_encoder_3`. If not defined, `prompt`
908
+ will be used instead
909
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
910
+ The height in pixels of the generated image. This is set to 1024 by default for the best results.
911
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
912
+ The width in pixels of the generated image. This is set to 1024 by default for the best results.
913
+ num_inference_steps (`int`, *optional*, defaults to 50):
914
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
915
+ expense of slower inference.
916
+ sigmas (`List[float]`, *optional*):
917
+ Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in
918
+ their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
919
+ will be used.
920
+ guidance_scale (`float`, *optional*, defaults to 7.0):
921
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
922
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
923
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
924
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
925
+ usually at the expense of lower image quality.
926
+ image_guidance_scale (`float`, *optional*, defaults to 1.5):
927
+ Image guidance scale is to push the generated image towards the initial image `image`. Image guidance
928
+ scale is enabled by setting `image_guidance_scale > 1`. Higher image guidance scale encourages to
929
+ generate images that are closely linked to the source image `image`, usually at the expense of lower image quality.
930
+ negative_prompt (`str` or `List[str]`, *optional*):
931
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
932
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
933
+ less than `1`).
934
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
935
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
936
+ `text_encoder_2`. If not defined, `negative_prompt` is used instead
937
+ negative_prompt_3 (`str` or `List[str]`, *optional*):
938
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_3` and
939
+ `text_encoder_3`. If not defined, `negative_prompt` is used instead
940
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
941
+ The number of images to generate per prompt.
942
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
943
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
944
+ to make generation deterministic.
945
+ latents (`torch.FloatTensor`, *optional*):
946
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
947
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
948
+ tensor will be generated by sampling using the supplied random `generator`.
949
+ prompt_embeds (`torch.FloatTensor`, *optional*):
950
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
951
+ provided, text embeddings will be generated from `prompt` input argument.
952
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
953
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
954
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
955
+ argument.
956
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
957
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
958
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
959
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
960
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
961
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
962
+ input argument.
963
+ ip_adapter_image (`PipelineImageInput`, *optional*):
964
+ Optional image input to work with IP Adapters.
965
+ ip_adapter_image_embeds (`torch.Tensor`, *optional*):
966
+ Pre-generated image embeddings for IP-Adapter. Should be a tensor of shape `(batch_size, num_images,
967
+ emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to
968
+ `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
969
+ output_type (`str`, *optional*, defaults to `"pil"`):
970
+ The output format of the generated image. Choose between
971
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
972
+ return_dict (`bool`, *optional*, defaults to `True`):
973
+ Whether or not to return a [`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput`] instead of
974
+ a plain tuple.
975
+ joint_attention_kwargs (`dict`, *optional*):
976
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
977
+ `self.processor` in
978
+ [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
979
+ callback_on_step_end (`Callable`, *optional*):
980
+ A function that is called at the end of each denoising step during inference. The function is called
981
+ with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
982
+ callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
983
+ `callback_on_step_end_tensor_inputs`.
984
+ callback_on_step_end_tensor_inputs (`List`, *optional*):
985
+ The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list
986
+ will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the
987
+ `._callback_tensor_inputs` attribute of your pipeline class.
988
+ max_sequence_length (`int` defaults to 256): Maximum sequence length to use with the `prompt`.
989
+ skip_guidance_layers (`List[int]`, *optional*):
990
+ A list of integers that specify layers to skip during guidance. If not provided, all layers will be
991
+ used for guidance. If provided, the guidance will only be applied to the layers specified in the list.
992
+ Recommended value by StabilityAI for Stable Diffusion 3.5 Medium is [7, 8, 9].
993
+ skip_layer_guidance_scale (`int`, *optional*): The scale of the guidance for the layers specified in
994
+ `skip_guidance_layers`. The guidance will be applied to the layers specified in `skip_guidance_layers`
995
+ with a scale of `skip_layer_guidance_scale`. The guidance will be applied to the rest of the layers
996
+ with a scale of `1`.
997
+ skip_layer_guidance_stop (`int`, *optional*): The step at which the guidance for the layers specified in
998
+ `skip_guidance_layers` will stop. The guidance will be applied to the layers specified in
999
+ `skip_guidance_layers` until the fraction specified in `skip_layer_guidance_stop`. Recommended value by
1000
+ StabiltyAI for Stable Diffusion 3.5 Medium is 0.2.
1001
+ skip_layer_guidance_start (`int`, *optional*): The step at which the guidance for the layers specified in
1002
+ `skip_guidance_layers` will start. The guidance will be applied to the layers specified in
1003
+ `skip_guidance_layers` from the fraction specified in `skip_layer_guidance_start`. Recommended value by
1004
+ StabiltyAI for Stable Diffusion 3.5 Medium is 0.01.
1005
+ mu (`float`, *optional*): `mu` value used for `dynamic_shifting`.
1006
+
1007
+ Examples:
1008
+
1009
+ Returns:
1010
+ [`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput`] or `tuple`:
1011
+ [`~pipelines.stable_diffusion_3.StableDiffusion3PipelineOutput`] if `return_dict` is True, otherwise a
1012
+ `tuple`. When returning a tuple, the first element is a list with the generated images.
1013
+ """
1014
+
1015
+ height = height or self.default_sample_size * self.vae_scale_factor
1016
+ width = width or self.default_sample_size * self.vae_scale_factor
1017
+
1018
+ # 1. Check inputs. Raise error if not correct
1019
+ self.check_inputs(
1020
+ prompt,
1021
+ prompt_2,
1022
+ prompt_3,
1023
+ height,
1024
+ width,
1025
+ negative_prompt=negative_prompt,
1026
+ negative_prompt_2=negative_prompt_2,
1027
+ negative_prompt_3=negative_prompt_3,
1028
+ prompt_embeds=prompt_embeds,
1029
+ negative_prompt_embeds=negative_prompt_embeds,
1030
+ pooled_prompt_embeds=pooled_prompt_embeds,
1031
+ negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
1032
+ callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs,
1033
+ max_sequence_length=max_sequence_length,
1034
+ )
1035
+
1036
+ self._guidance_scale = guidance_scale
1037
+ self._image_guidance_scale = image_guidance_scale
1038
+ self._skip_layer_guidance_scale = skip_layer_guidance_scale
1039
+ self._clip_skip = clip_skip
1040
+ self._joint_attention_kwargs = joint_attention_kwargs
1041
+ self._interrupt = False
1042
+
1043
+ # 2. Define call parameters
1044
+ if prompt is not None and isinstance(prompt, str):
1045
+ batch_size = 1
1046
+ elif prompt is not None and isinstance(prompt, list):
1047
+ batch_size = len(prompt)
1048
+ else:
1049
+ batch_size = prompt_embeds.shape[0]
1050
+
1051
+ device = self._execution_device
1052
+
1053
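+        # 3. Encode input prompt (prompt, prompt_2, prompt_3) into prompt_embeds / pooled_prompt_embeds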
+        lora_scale = (
+            self.joint_attention_kwargs.get("scale", None) if self.joint_attention_kwargs is not None else None
+        )
+        (
+            prompt_embeds,
+            negative_prompt_embeds,
+            pooled_prompt_embeds,
+            negative_pooled_prompt_embeds,
+        ) = self.encode_prompt(
+            prompt=prompt,
+            prompt_2=prompt_2,
+            prompt_3=prompt_3,
+            negative_prompt=negative_prompt,
+            negative_prompt_2=negative_prompt_2,
+            negative_prompt_3=negative_prompt_3,
+            do_classifier_free_guidance=self.do_classifier_free_guidance,
+            prompt_embeds=prompt_embeds,
+            negative_prompt_embeds=negative_prompt_embeds,
+            pooled_prompt_embeds=pooled_prompt_embeds,
+            negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
+            device=device,
+            clip_skip=self.clip_skip,
+            num_images_per_prompt=num_images_per_prompt,
+            max_sequence_length=max_sequence_length,
+            lora_scale=lora_scale,
+        )
+
+        if self.do_classifier_free_guidance:
+            if skip_guidance_layers is not None:
+                original_prompt_embeds = prompt_embeds
+                original_pooled_prompt_embeds = pooled_prompt_embeds
+            # The extra concat is similar to how it's done in SD InstructPix2Pix.
+            prompt_embeds = torch.cat([prompt_embeds, negative_prompt_embeds, negative_prompt_embeds], dim=0)
+            pooled_prompt_embeds = torch.cat(
+                [pooled_prompt_embeds, negative_pooled_prompt_embeds, negative_pooled_prompt_embeds], dim=0
+            )
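+            # Batch layout for the InstructPix2Pix-style guidance below: [text + image, image-only, unconditional].
+            # The text embeddings are duplicated so that the second and third branches both use the negative prompt,
+            # while the image conditioning is handled through `image_latents` (see `prepare_image_latents` below).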
+
+        # 4. Prepare latent variables
+        num_channels_latents = self.vae.config.latent_channels
+        latents = self.prepare_latents(
+            batch_size * num_images_per_prompt,
+            num_channels_latents,
+            height,
+            width,
+            prompt_embeds.dtype,
+            device,
+            generator,
+            latents,
+        )
+        # 5. Prepare image latents
+        image = self.image_processor.preprocess(image)
+        image_latents = self.prepare_image_latents(
+            image,
+            batch_size,
+            num_images_per_prompt,
+            prompt_embeds.dtype,
+            device,
+            generator,
+            self.do_classifier_free_guidance,
+        )
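+        # `image_latents` carry the input-image conditioning; they are concatenated with the noisy latents along the
+        # channel dimension at every denoising step, which is why the transformer's `in_channels` must equal
+        # num_channels_latents + num_channels_image (checked below).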
+
+        # 6. Check that shapes of latents and image match the DiT (SD3) in_channels
+        num_channels_image = image_latents.shape[1]
+        if num_channels_latents + num_channels_image != self.transformer.config.in_channels:
+            raise ValueError(
+                f"Incorrect configuration settings! The config of `pipeline.transformer`: {self.transformer.config} expects"
+                f" {self.transformer.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
+                f" `num_channels_image`: {num_channels_image} "
+                f" = {num_channels_latents + num_channels_image}. Please verify the config of"
+                " `pipeline.transformer` or your `image` input."
+            )
+
+        # 7. Prepare timesteps
+        scheduler_kwargs = {}
+        if self.scheduler.config.get("use_dynamic_shifting", None) and mu is None:
+            _, _, height, width = latents.shape
+            image_seq_len = (height // self.transformer.config.patch_size) * (
+                width // self.transformer.config.patch_size
+            )
+            mu = calculate_shift(
+                image_seq_len,
+                self.scheduler.config.get("base_image_seq_len", 256),
+                self.scheduler.config.get("max_image_seq_len", 4096),
+                self.scheduler.config.get("base_shift", 0.5),
+                self.scheduler.config.get("max_shift", 1.16),
+            )
+            scheduler_kwargs["mu"] = mu
+        elif mu is not None:
+            scheduler_kwargs["mu"] = mu
+        timesteps, num_inference_steps = retrieve_timesteps(
+            self.scheduler,
+            num_inference_steps,
+            device,
+            sigmas=sigmas,
+            **scheduler_kwargs,
+        )
+        num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
+        self._num_timesteps = len(timesteps)
+
+        # 8. Prepare image embeddings
+        if (ip_adapter_image is not None and self.is_ip_adapter_active) or ip_adapter_image_embeds is not None:
+            ip_adapter_image_embeds = self.prepare_ip_adapter_image_embeds(
+                ip_adapter_image,
+                ip_adapter_image_embeds,
+                device,
+                batch_size * num_images_per_prompt,
+                self.do_classifier_free_guidance,
+            )
+
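+            # Route the IP-Adapter image embeddings to the attention processors via `joint_attention_kwargs`.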
+            if self.joint_attention_kwargs is None:
+                self._joint_attention_kwargs = {"ip_adapter_image_embeds": ip_adapter_image_embeds}
+            else:
+                self._joint_attention_kwargs.update(ip_adapter_image_embeds=ip_adapter_image_embeds)
+
+        # 9. Denoising loop
+        with self.progress_bar(total=num_inference_steps) as progress_bar:
+            for i, t in enumerate(timesteps):
+                if self.interrupt:
+                    continue
+
+                # expand the latents if we are doing classifier free guidance
+                # The latents are expanded 3 times because for pix2pix the guidance
+                # is applied for both the text and the input image.
+                latent_model_input = torch.cat([latents] * 3) if self.do_classifier_free_guidance else latents
+                # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
+                timestep = t.expand(latent_model_input.shape[0])
+                scaled_latent_model_input = torch.cat([latent_model_input, image_latents], dim=1)
+
+                noise_pred = self.transformer(
+                    hidden_states=scaled_latent_model_input,
+                    timestep=timestep,
+                    encoder_hidden_states=prompt_embeds,
+                    pooled_projections=pooled_prompt_embeds,
+                    joint_attention_kwargs=self.joint_attention_kwargs,
+                    return_dict=False,
+                )[0]
+
+                # perform guidance
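+                # Two guidance scales, as in InstructPix2Pix: `image_guidance_scale` moves the prediction from the
+                # unconditional branch toward the image-conditioned one, and `guidance_scale` moves it further toward
+                # the text-and-image-conditioned one.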
+                if self.do_classifier_free_guidance:
+                    noise_pred_text, noise_pred_image, noise_pred_uncond = noise_pred.chunk(3)
+                    noise_pred = (
+                        noise_pred_uncond
+                        + self.guidance_scale * (noise_pred_text - noise_pred_image)
+                        + self.image_guidance_scale * (noise_pred_image - noise_pred_uncond)
+                    )
+                    should_skip_layers = (
+                        True
+                        if i > num_inference_steps * skip_layer_guidance_start
+                        and i < num_inference_steps * skip_layer_guidance_stop
+                        else False
+                    )
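+                    # Skip-layer guidance: run an extra pass with the positive prompt while skipping
+                    # `skip_guidance_layers`, then push the prediction away from that degraded estimate
+                    # (toward the full-layer text-conditioned prediction).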
+                    if skip_guidance_layers is not None and should_skip_layers:
+                        timestep = t.expand(latents.shape[0])
+                        latent_model_input = latents
+                        noise_pred_skip_layers = self.transformer(
+                            hidden_states=latent_model_input,
+                            timestep=timestep,
+                            encoder_hidden_states=original_prompt_embeds,
+                            pooled_projections=original_pooled_prompt_embeds,
+                            joint_attention_kwargs=self.joint_attention_kwargs,
+                            return_dict=False,
+                            skip_layers=skip_guidance_layers,
+                        )[0]
+                        noise_pred = (
+                            noise_pred + (noise_pred_text - noise_pred_skip_layers) * self._skip_layer_guidance_scale
+                        )
+
+                # compute the previous noisy sample x_t -> x_t-1
+                latents_dtype = latents.dtype
+                latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
+
+                if latents.dtype != latents_dtype:
+                    if torch.backends.mps.is_available():
+                        # some platforms (e.g. apple mps) misbehave due to a pytorch bug: https://github.com/pytorch/pytorch/pull/99272
+                        latents = latents.to(latents_dtype)
+
+                if callback_on_step_end is not None:
+                    callback_kwargs = {}
+                    for k in callback_on_step_end_tensor_inputs:
+                        callback_kwargs[k] = locals()[k]
+                    callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)
+
+                    latents = callback_outputs.pop("latents", latents)
+                    prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds)
+                    negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds)
+                    negative_pooled_prompt_embeds = callback_outputs.pop(
+                        "negative_pooled_prompt_embeds", negative_pooled_prompt_embeds
+                    )
+                    image_latents = callback_outputs.pop("image_latents", image_latents)
+
+                # update the progress bar
+                if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
+                    progress_bar.update()
+
+                if XLA_AVAILABLE:
+                    xm.mark_step()
+
+        if output_type == "latent":
+            image = latents
+
+        else:
+            latents = (latents / self.vae.config.scaling_factor) + self.vae.config.shift_factor
+            latents = latents.to(dtype=self.vae.dtype)
+
+            image = self.vae.decode(latents, return_dict=False)[0]
+            image = self.image_processor.postprocess(image, output_type=output_type)
+
+        # Offload all models
+        self.maybe_free_model_hooks()
+
+        if not return_dict:
+            return (image,)
+
+        return StableDiffusion3PipelineOutput(images=image)