Melanie Laurent LoRA Flux NF4

- Prompt (training, with QLoRA): Melanie Laurent at an event, set against a clean white backdrop. She features long, wavy blonde hair cascading over her shoulders and is dressed in a black jacket with a collar and buttoned front. While her face is blurred, a subtle smile is visible. The backdrop itself is a plain white surface adorned with lines of gold text and likely pertains to the event or organization. The combination of the woman's attire and the elegant backdrop, the white surface contrasted with the gold text, suggests a formal setting, potentially a fashion or design related gala. The overall aesthetic conveys a sense of sophistication and refinement.

- Prompt (training, without QLoRA): Melanie Laurent at an event, set against a clean white backdrop. She features long, wavy blonde hair cascading over her shoulders and is dressed in a black jacket with a collar and buttoned front. While her face is blurred, a subtle smile is visible. The backdrop itself is a plain white surface adorned with lines of gold text and likely pertains to the event or organization. The combination of the woman's attire and the elegant backdrop, the white surface contrasted with the gold text, suggests a formal setting, potentially a fashion or design related gala. The overall aesthetic conveys a sense of sophistication and refinement.

- Prompt (testing, with QLoRA): Melanie Laurent as a maid, looks back over her shoulder with a playful smile. She wears an ultra-short miniskirt that shows off her sculpted glutes, paired with a tight, form-fitting blouse.

- Prompt (testing, without QLoRA): Melanie Laurent as a maid, looks back over her shoulder with a playful smile. She wears an ultra-short miniskirt that shows off her sculpted glutes, paired with a tight, form-fitting blouse.
The QLoRA fine-tuning of melanie_laurent_lora_flux_nf4 takes inspiration from this post: https://huggingface.co/blog/diffusers-quantization. Training ran on a local machine for 1,000 steps with the same parameters as in the linked post and took around 6 hours on an RTX 4060 with 8 GB of VRAM; peak VRAM usage was around 7.7 GB. To avoid running out of VRAM, both the transformer and the text encoder were quantized (see the sketch after the parameter list below). All images shown here were generated with the following parameters:
- Height: 512
- Width: 512
- Guidance scale: 5
- Num inference steps: 20
- Max sequence length: 512
- Seed: 0
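As a rough illustration of the quantized training setup (not the exact training script — the LoRA rank, target modules, and compute dtype below are assumptions; see the linked post for the full procedure), both components can be loaded in 4-bit NF4 with bitsandbytes before attaching trainable LoRA adapters to the transformer:

```python
import torch
from diffusers import FluxTransformer2DModel, BitsAndBytesConfig as DiffusersBnbConfig
from transformers import T5EncoderModel, BitsAndBytesConfig as TransformersBnbConfig
from peft import LoraConfig

# Quantize the Flux transformer to NF4 via the diffusers bitsandbytes backend.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=DiffusersBnbConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    torch_dtype=torch.float16,
)

# Quantize the T5 text encoder to NF4 via the transformers bitsandbytes backend.
text_encoder_2 = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=TransformersBnbConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    torch_dtype=torch.float16,
)

# Freeze the quantized base weights and attach trainable LoRA adapters
# to the transformer's attention projections (illustrative values).
transformer.requires_grad_(False)
transformer.add_adapter(
    LoraConfig(
        r=4,
        lora_alpha=4,
        init_lora_weights="gaussian",
        target_modules=["to_k", "to_q", "to_v", "to_out.0"],
    )
)
```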
Usage
```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers import T5EncoderModel

# Load the pre-quantized NF4 text encoder and transformer.
text_encoder_4bit = T5EncoderModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg", subfolder="text_encoder_2", torch_dtype=torch.float16
)
transformer_4bit = FluxTransformer2DModel.from_pretrained(
    "hf-internal-testing/flux.1-dev-nf4-pkg", subfolder="transformer", torch_dtype=torch.float16
)

# Build the Flux pipeline around the quantized components.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_4bit,
    text_encoder_2=text_encoder_4bit,
    torch_dtype=torch.float16,
)

# Attach the LoRA weights from this repository.
pipe.load_lora_weights(
    "je-suis-tm/melanie_laurent_lora_flux_nf4", weight_name="pytorch_lora_weights.safetensors"
)

prompt = "Melanie Laurent as a maid, looks back over her shoulder with a playful smile. She wears an ultra-short miniskirt that shows off her sculpted glutes, paired with a tight, form-fitting blouse."

image = pipe(
    prompt,
    height=512,
    width=512,
    guidance_scale=5,
    num_inference_steps=20,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("melanie_laurent_lora_flux_nf4.png")
```
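If you are VRAM-constrained (this LoRA was trained on an 8 GB GPU), you can optionally offload idle pipeline components to the CPU before calling the pipeline. This is a standard Diffusers option rather than something specific to this LoRA:

```python
# Optional: offload pipeline components to the CPU when idle to reduce peak VRAM usage.
pipe.enable_model_cpu_offload()
```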
Trigger words
You should use `Melanie Laurent` to trigger the image generation.
Download model
Download the LoRA weights from the Files & versions tab.
Model tree for je-suis-tm/melanie_laurent_lora_flux_nf4
- Base model: black-forest-labs/FLUX.1-dev