Official implementation of K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs.
Below are the results of K-LoRA. Rows correspond to style references, columns to subject references, and each cell shows the output generated from a single randomly selected seed.
In the supplementary material of our paper, we propose an alternative scale s*. If you wish to generate images with more style information, we recommend choosing s*; if you prefer more texture detail, s is the better option. Select whichever matches your preference. (For FLUX, we recommend s*.) Below are reference images for the different scales.
- A quick guide to training local LoRAs
- K-LoRA for SDXL (inference)
- K-LoRA for FLUX (inference)
- K-LoRA for video models (inference)
git clone https://github.com/ouyangziheng/K-LoRA.git
cd K-LoRA
pip install -r requirements.txt
In this step, two LoRAs are trained on SDXL: one for the subject images and one for the style images. Using SDXL here is important because we found that the pre-trained SDXL learns strongly even when fine-tuned on only one reference style image.
Fortunately, diffusers already implements DreamBooth LoRA training for SDXL here, and you can simply follow its instructions.
For example, your training command would look like this:
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
# for subject
export OUTPUT_DIR="lora-sdxl-dog"
export INSTANCE_DIR="dog"
export PROMPT="a sbu dog"
export VALID_PROMPT="a sbu dog in a bucket"
# for style
# export OUTPUT_DIR="lora-sdxl-waterpainting"
# export INSTANCE_DIR="waterpainting"
# export PROMPT="a cat in szn style"
# export VALID_PROMPT="a man in szn style"
accelerate launch train_dreambooth_lora_sdxl.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="${PROMPT}" \
  --rank=8 \
  --resolution=1024 \
  --train_batch_size=1 \
  --learning_rate=5e-5 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --validation_prompt="${VALID_PROMPT}" \
  --validation_epochs=50 \
  --seed="0" \
  --mixed_precision="no" \
  --enable_xformers_memory_efficient_attention \
  --gradient_checkpointing \
  --use_8bit_adam
- You can find style images in aim-uofa/StyleDrop-PyTorch.
- You can find content images in google/dreambooth/tree/main/dataset.
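Before fusing, it can be worth sanity-checking each trained LoRA on its own. The following is a minimal, hypothetical check using the standard diffusers API; the output directory and prompt reuse the subject training example above.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the same base SDXL model used for training.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load one of the LoRAs trained above (here, the subject LoRA).
pipe.load_lora_weights("lora-sdxl-dog")

# Generate with the trigger token used during training.
image = pipe("a sbu dog in a bucket", num_inference_steps=30).images[0]
image.save("sanity_check.png")
```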
You can use the script below directly for inference, or interact with K-LoRA through the Gradio demo.
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export LORA_PATH_CONTENT="..."
export LORA_PATH_STYLE="..."
export OUTPUT_FOLDER="..."
export PROMPT="..."
python inference_sd.py \
  --pretrained_model_name_or_path="$MODEL_NAME" \
  --lora_name_or_path_content="$LORA_PATH_CONTENT" \
  --lora_name_or_path_style="$LORA_PATH_STYLE" \
  --output_folder="$OUTPUT_FOLDER" \
  --prompt="$PROMPT"

# using Gradio
# python inference_gradio.py \
#   --pretrained_model_name_or_path="$MODEL_NAME" \
#   --lora_name_or_path_content="$LORA_PATH_CONTENT" \
#   --lora_name_or_path_style="$LORA_PATH_STYLE" \
#   --output_folder="$OUTPUT_FOLDER" \
#   --prompt="$PROMPT"
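As a concrete example, a hypothetical invocation that fuses the two LoRAs trained in the previous section could look like this; the output folder is illustrative, and the prompt simply combines the two trigger tokens (sbu and szn) used during training.

```shell
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export LORA_PATH_CONTENT="lora-sdxl-dog"
export LORA_PATH_STYLE="lora-sdxl-waterpainting"
export OUTPUT_FOLDER="outputs/dog-waterpainting"
export PROMPT="a sbu dog in szn style"

python inference_sd.py \
  --pretrained_model_name_or_path="$MODEL_NAME" \
  --lora_name_or_path_content="$LORA_PATH_CONTENT" \
  --lora_name_or_path_style="$LORA_PATH_STYLE" \
  --output_folder="$OUTPUT_FOLDER" \
  --prompt="$PROMPT"
```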
If you want to test the FLUX version of K-LoRA, you can directly run the inference_flux.py script to perform inference with a community LoRA.
If you are testing with FLUX, we recommend using a recent version of the FLUX dependencies; please refer to FLUX for the details.
If you wish to use a local FLUX LoRA, we recommend training it with the DreamBooth LoRA script; for training instructions, refer to dreambooth_lora.
For local FLUX LoRA inference, add the following plug-and-play snippet to your inference code:
from utils import insert_community_flux_lora_to_unet

# Patch the loaded FLUX pipeline so that the content and style LoRAs
# are fused with K-LoRA during denoising.
unet = insert_community_flux_lora_to_unet(
    unet=pipe,  # the loaded FLUX pipeline
    lora_weights_content_path=content_lora,  # path to the subject LoRA
    lora_weights_style_path=style_lora,  # path to the style LoRA
    alpha=alpha,  # K-LoRA fusion hyperparameter
    beta=beta,  # K-LoRA fusion hyperparameter
    diffuse_step=flux_diffuse_step,  # number of diffusion steps
    content_lora_weight_name=content_lora_weight_name,  # weight file name of the subject LoRA
    style_lora_weight_name=style_lora_weight_name,  # weight file name of the style LoRA
)
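For context, here is a minimal end-to-end sketch of how this snippet could be wired into a FLUX run. It assumes the diffusers FluxPipeline and the black-forest-labs/FLUX.1-dev checkpoint; the LoRA paths, weight file names, and the alpha, beta, and flux_diffuse_step values are placeholders, not recommended settings.

```python
import torch
from diffusers import FluxPipeline

from utils import insert_community_flux_lora_to_unet

# Load a base FLUX model (assumed checkpoint; use the one you tested with).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Placeholder paths and hyperparameters -- replace with your own.
content_lora = "path/to/content-lora"
style_lora = "path/to/style-lora"
content_lora_weight_name = "pytorch_lora_weights.safetensors"
style_lora_weight_name = "pytorch_lora_weights.safetensors"
alpha, beta, flux_diffuse_step = 1.5, 0.5, 28  # hypothetical values

# Fuse both LoRAs into the pipeline with K-LoRA.
unet = insert_community_flux_lora_to_unet(
    unet=pipe,
    lora_weights_content_path=content_lora,
    lora_weights_style_path=style_lora,
    alpha=alpha,
    beta=beta,
    diffuse_step=flux_diffuse_step,
    content_lora_weight_name=content_lora_weight_name,
    style_lora_weight_name=style_lora_weight_name,
)

# Generate with trigger tokens from both LoRAs in the prompt.
image = pipe("a sbu dog in szn style", num_inference_steps=flux_diffuse_step).images[0]
image.save("flux_klora.png")
```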
If you use this code, please cite the following paper:
@inproceedings{ouyang2025k,
  title={K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs},
  author={Ouyang, Ziheng and Li, Zhen and Hou, Qibin},
  booktitle={CVPR},
  year={2025}
}
If you have any questions or suggestions, please feel free to open an issue or contact the authors at zihengouyang666@gmail.com.
Licensed under the Creative Commons Attribution-NonCommercial 4.0 International license, for non-commercial use only. Any commercial use requires formal permission in advance.