HiDream LoRA support #11383
Hey @vladmandic, we haven't yet added support for HiDream LoRAs that are not in |
which is fine, but why does |
Yeah, it should throw an error if the keys are incompatible. Could you please provide a code snippet for the LoRA you're trying to load that neither loads nor errors? |
@linoytsaban almost every lora from the link i provided.

```python
pipe.load_lora_weights(filename, adapter_name="test")  # no error
print(pipe.get_list_adapters())  # empty list
pipe.set_adapters(adapter_names=["test"], adapter_weights=[1.0])  # fails as lora is not loaded
``` |
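One way to diagnose a silent no-op like the one above is to group the checkpoint's state-dict keys by their top-level prefix: diffusers only picks up keys it recognizes for the pipeline's components, so keys under an unexpected naming scheme get skipped. A minimal sketch, using hypothetical key names:

```python
from collections import Counter

def prefix_histogram(keys):
    """Count state-dict keys by their top-level (dot-separated) prefix."""
    return Counter(k.split(".", 1)[0] for k in keys)

# Hypothetical key names: one diffusers-style, two Kohya/civitai-style.
keys = [
    "transformer.double_stream_blocks.0.attn.to_q.lora_A.weight",
    "lora_unet_double_blocks_0_attn_qkv.lora_down.weight",
    "lora_unet_double_blocks_0_attn_qkv.lora_up.weight",
]
print(prefix_histogram(keys))
```

If nothing in the histogram starts with the prefix the loader expects, the empty adapter list is the predictable outcome.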
What model is it? Full or Dev? Could you please provide two things?
I did the following and it logged:

```python
from diffusers import DiffusionPipeline
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
import torch

repo_id = "HiDream-ai/HiDream-I1-Full"
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)
pipeline = DiffusionPipeline.from_pretrained(
    repo_id,
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
).to("cuda")
pipeline.load_lora_weights("sayakpaul/different-lora-from-civitai", weight_name="486_hidream.safetensors")
```

Log:

```
No LoRA keys associated to HiDreamImageTransformer2DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any HiDreamImageTransformer2DModel related params. You can also try specifying `prefix=None` to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
``` |
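The warning reflects, roughly, how loading filters state-dict keys by a component prefix: keys that don't start with `transformer.` are dropped, and if nothing survives, no adapter gets registered. A simplified, hypothetical re-implementation of that filtering step (not the actual diffusers code):

```python
def filter_by_prefix(state_dict, prefix="transformer"):
    """Keep only keys under `prefix`, stripping the prefix.
    Simplified mimic of the loader's filtering, for illustration only."""
    marker = prefix + "."
    return {k[len(marker):]: v for k, v in state_dict.items() if k.startswith(marker)}

# A civitai-style checkpoint (hypothetical key names) yields an empty
# dict -> nothing is loaded, only a warning is emitted.
civitai_sd = {"lora_unet_double_blocks_0_attn_qkv.lora_down.weight": 0}
diffusers_sd = {"transformer.blocks.0.attn.to_q.lora_A.weight": 0}

print(filter_by_prefix(civitai_sd))    # {}
print(filter_by_prefix(diffusers_sd))  # {'blocks.0.attn.to_q.lora_A.weight': 0}
```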
i used hidream-i1-full. your snippet is fine. |
Not to digress, https://huggingface.co/spaces/sayakpaul/civitai-to-hub could help.
It's `diffusers/src/diffusers/loaders/peft.py`, line 418 at commit b4be422. |
Hi, I am facing the same issue. Here's a minimal reproducible example:

```python
import torch
from diffusers import (
    AutoencoderKL,
    FlowMatchEulerDiscreteScheduler,
    HiDreamImagePipeline,
    HiDreamImageTransformer2DModel,
)
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM

llama_repo = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model_id = "HiDream-ai/HiDream-I1-Full"

tokenizer_4 = PreTrainedTokenizerFast.from_pretrained(llama_repo)
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    llama_repo,
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)
transformer = HiDreamImageTransformer2DModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    subfolder="transformer",
)
pipeline = HiDreamImagePipeline.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    transformer=transformer,
)
pipeline.load_lora_weights(
    "sayakpaul/different-lora-from-civitai",
    weight_name="486_hidream.safetensors",
)
print(pipeline.get_list_adapters())
```

This fails for me with the following error:
Additionally, when I load a LoRA that I trained myself (using
Would appreciate any help on this! |
@mukundkhanna123 that is because the LoRA isn't supported yet but will be supported soon. Additionally, if you use the LoRA mentioned in the snippet, it shouldn't error out. Instead, it should lead to the warning mentioned in #11383 (comment). Did you make any changes to the
Can you please open a separate issue for that? Cc: @linoytsaban |
Okay, I used the latest commit bd96a08 and it didn't error out with the LoRA mentioned in the snippet; it just gave the warning. I am able to load the LoRA I trained as well (trained with diffusers). However, there is no change in the output image. |
Could you please create a new issue for that and tag me and @linoytsaban? Linoy has been training a bunch of LoRAs with our script and the resultant LoRAs work. Here is an example: https://huggingface.co/linoyts/hidream-3dicon-lora |
Hi, thank you for taking the time to reply. I updated the LoRA script to do diffusion training on HiDream using a LoRA. I am seeing that the grad norms for all lora_A weights are 0, which is why the weights are not getting updated. I have tried debugging but haven't been able to solve this. The LoRA parameters all have requires_grad=True and are passed to the optimizer. If you could help with this, that would be really helpful. Thank you
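One classic cause of exactly-zero lora_A gradients, for what it's worth: with the standard LoRA initialization (lora_B zeroed, lora_A random), the chain rule through delta = B·A·x gives dL/dA a factor of Bᵀ, so every lora_A gradient is exactly zero for as long as lora_B stays zero. If lora_B never moves (accidentally frozen, or the loss carries no gradient), lora_A grads stay zero forever. A minimal PyTorch demonstration of the first-step effect (toy shapes, not the author's training loop):

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 16)
target = torch.randn(4, 16)

# Standard LoRA init: A random, B zero.
lora_A = torch.nn.Linear(16, 8, bias=False)
lora_B = torch.nn.Linear(8, 16, bias=False)
torch.nn.init.zeros_(lora_B.weight)

# delta = B(A(x)); MSE against a nonzero target so dL/dy != 0.
y = lora_B(lora_A(x))
loss = (y - target).pow(2).mean()
loss.backward()

# A's gradient is exactly zero because backprop multiplies by the zero B.
print(lora_A.weight.grad.abs().max().item())  # 0.0
print(lora_B.weight.grad.abs().max().item())  # nonzero
```

After one optimizer step on lora_B, the lora_A gradients become nonzero, so persistently zero lora_A grads across many steps point at lora_B not updating.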
Hey @mukundkhanna123, can you share the configuration you used for training? |
I've tried multiple configurations, all with similar results: a learning rate in the range of 1e-4 to 1e-5, weight decay at 0 and at the default value, a batch size of 96 with gradient accumulation, no warmup, the AdamW optimizer, a DPO beta of 2000, and a rank of 64. The base model is HiDream-I1-Full. |
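For reference, that configuration could be written out as an invocation like the one below; the script name and flag names are illustrative (modeled on diffusers' training-script conventions), not taken from the author's actual DPO script:

```shell
# Hypothetical flags summarizing the configuration above; the
# gradient-accumulation split of the effective batch of 96 is a guess.
accelerate launch train_hidream_dpo_lora.py \
  --pretrained_model_name_or_path="HiDream-ai/HiDream-I1-Full" \
  --rank=64 \
  --learning_rate=1e-4 \
  --adam_weight_decay=0.0 \
  --train_batch_size=24 \
  --gradient_accumulation_steps=4 \
  --lr_warmup_steps=0 \
  --beta_dpo=2000
```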
I've trained on multiple concepts and didn't experience what you're describing, so it's hard to pinpoint the issue.
trained weights are here |
@linoytsaban I am doing LoRA training along with the loss function of DPO. Attaching a snippet of my training loop
|
Then we're digressing. We cannot really control code we haven't written, so we'd ask you to open a "Discussion" instead. It would be helpful for everyone if you respected that. |
@vladmandic sorry for the spam here on this issue. I am at ICLR, hence there's no update on this issue yet. I will get to it soon and update. |
Describe the bug
i've tried loras currently available on civitai and most appear to load without any errors using `load_lora_weights`. but later, they do not appear in `get_list_adapters` or `get_active_adapters` and do not seem to be applied to the model. if loras were not loaded, i'd expect to see some error? some return the error `Invalid LoRA checkpoint`, but most do not return any error.

list of currently available loras on civitai: https://civitai.com/search/models?baseModel=HiDream&modelType=LORA&sortBy=models_v9
Reproduction
N/A
Logs
System Info
diffusers==e30d3bf5442fbdbee899e8a5da0b11b621d54f1b
Who can help?
@linoytsaban @sayakpaul