`lora_scale` has no effect when loading with Flux #9525
Comments
Hi, I never use that method, can you test with this?

```python
pipeline.load_lora_weights(lora_path, weight_name=weight_name, adapter_name="toy")
pipeline.set_adapters("toy", 0.5)
```

And yeah, the Flux pipeline doesn't have `cross_attention_kwargs`.
Your suggestion … In this guide I see the following code block, which says to pass a dictionary in the `joint_attention_kwargs` argument:
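(The guide's code block wasn't captured in this thread; roughly, the pattern it describes looks like this sketch, with a placeholder prompt and scale value:)

```python
# Scale a loaded LoRA at call time by passing a {"scale": ...} dict.
# Flux pipelines accept joint_attention_kwargs where other pipelines
# accept cross_attention_kwargs.
image = pipe(
    prompt="a photo of a hound",  # placeholder
    joint_attention_kwargs={"scale": 0.5},
).images[0]
```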
Hey. I see that you're trying to load a LoRA that doesn't look compatible with Flux. On another note (for the diffusers devs), I would assume that there would be a warning when trying to load a LoRA with incompatible keys (maybe a list of all keys that were incompatible after our A1111/Kohya converters tried and couldn't find a match). Is this not the case? @yiyixuxu
Oh, I didn't even check the LoRA. IMO we should throw a warning if someone tries to load a LoRA for a different architecture; at least the times I loaded SD 1.5 LoRAs with SDXL by mistake, I got the incorrect-keys error.
Yes, if you pass a scalar instead of a dictionary, it applies the scale to all the layers.
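To make the distinction concrete, here is a minimal sketch (the dict format follows the diffusers "adapter strength" docs for an SDXL-style pipeline; the component keys are placeholders and differ for Flux, which has a transformer rather than a UNet):

```python
# Scalar: one scale applied uniformly to every LoRA layer.
pipe.set_adapters("toy", 0.5)

# Dict: per-component scales; components left out keep a scale of 1.0.
# (SDXL-style keys shown for illustration only.)
pipe.set_adapters(
    "toy",
    {"text_encoder": 0.5, "unet": {"down": 0.9, "up": 0.6}},
)
```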
That's a good point, thanks for catching that. I swapped in a new one from here:
and tried applying the scale factor again as suggested, but still no change in the output:
I think it's `joint_attention_kwargs` for Flux.
I don't really use Flux that much, so I had to do some tests. Actually, the `scale` in `joint_attention_kwargs` does work. So right now it works like this:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "TheLastBen/The_Hound", weight_name="sandor_clegane_single_layer.safetensors"
)

joint_attention_kwargs = {"scale": 1.0}
prompt = "sandor clegane drinking in a pub"

image = pipe(
    prompt=prompt,
    num_inference_steps=30,
    width=1024,
    height=1024,
    generator=torch.Generator("cpu").manual_seed(42),
    joint_attention_kwargs=joint_attention_kwargs,
).images[0]
```

Doing this:

```python
pipe.load_lora_weights(
    "TheLastBen/The_Hound",
    weight_name="sandor_clegane_single_layer.safetensors",
    adapter_nane="test_lora",  # (sic) note the misspelled keyword; it matters later
)
pipe.set_adapters("test_lora", 1.0)
```

disables the LoRA for some reason, but I can't look into it right now. @sayakpaul for awareness
Yes, but as you've noted, I can't set the scale: https://github.com/bghira/discord-tron-client/blob/master/discord_tron_client/classes/image_manipulation/pipeline_runners/flux.py#L43
Interesting, will take a look. We do check for the validity of … (line 896 in 1c6ede9). I will look into it to try to make it more robust.
I verified that including `joint_attention_kwargs={"scale": ...}` in the pipeline call changes the output as expected. If anyone has any insight into how to load multiple LoRAs and independently modify their weights, that would be useful, but otherwise my primary issue is fixed.
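On the multi-LoRA question, a minimal sketch using the standard diffusers adapter API (the repo IDs and adapter names below are placeholders):

```python
# Load two LoRAs under distinct adapter names (placeholder repo IDs).
pipe.load_lora_weights("some-user/lora-one", adapter_name="style")
pipe.load_lora_weights("some-user/lora-two", adapter_name="subject")

# Activate both with independent weights; call set_adapters again at
# any time to rebalance them without reloading the weights.
pipe.set_adapters(["style", "subject"], adapter_weights=[0.8, 0.5])
```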
Appreciate the investigation, @cshowley! I am looking into the other point now. Thanks for flagging!
@asomoza this is what we had to do:

```diff
  pipe.load_lora_weights(
      "TheLastBen/The_Hound", weight_name="sandor_clegane_single_layer.safetensors", adapter_nane="test_lora"
  )
- pipe.set_adapters("test_lora", 1.0)
+ pipe.set_adapters("default_0", 1.0)
```

I will work on catching this as an error and improve the testing suite.
Oh, I had a typo: `adapter_nane` instead of `adapter_name`, which is why the adapter ended up registered under the default name `default_0`.
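For completeness, with the keyword spelled correctly the snippet behaves as intended (the misspelled `adapter_nane` was silently ignored rather than raising an error):

```python
pipe.load_lora_weights(
    "TheLastBen/The_Hound",
    weight_name="sandor_clegane_single_layer.safetensors",
    adapter_name="test_lora",  # correctly spelled keyword
)
pipe.set_adapters("test_lora", 1.0)
```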
#9535 should solve it. Closing this issue.
Describe the bug

According to loading LoRAs for inference, an argument `cross_attention_kwargs={"scale": 0.5}` can be added to a `pipeline()` call to vary the impact of a LoRA on image generation. As the `FluxPipeline` class doesn't support this argument, I followed the guide here to embed the text prompt with a LoRA scaling parameter. However, the image remained unchanged with a fixed seed + prompt and a variable `lora_scale`. I checked the embedding values for different values of `lora_scale` and saw they did not change either. Does Flux in `diffusers` not support LoRA scaling, or am I missing something?

Reproduction
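(The reproduction snippet itself was not captured here; the following is a sketch of the setup described above, assuming the separate `encode_prompt` call with a `lora_scale` argument from the linked guide.)

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "TheLastBen/The_Hound", weight_name="sandor_clegane_single_layer.safetensors"
)

# Encode the prompt with a LoRA scale; the report is that varying
# lora_scale leaves these embeddings (and the image) unchanged.
prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
    prompt="sandor clegane drinking in a pub",
    prompt_2=None,
    lora_scale=0.5,
)

image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    num_inference_steps=30,
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
```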
Logs
No response
System Info
Who can help?
No response