[Docs Update] AutoPipelineForInpainting.from_pretrained fails to load runwayml/stable-diffusion-inpainting without variant="fp16" #11528

Open
Player256 opened this issue May 9, 2025 · 0 comments
Labels
bug Something isn't working

Comments


Player256 commented May 9, 2025

Describe the bug

I was following this part of the docs (the code just above "Configure pipeline parameters") and found a small mistake that leads to the error log below: `from_pretrained` is called without `variant="fp16"`, so the loader looks for `diffusion_pytorch_model.bin`, which the repository's `unet` subfolder does not provide. Adding `variant="fp16"` resolves the issue.
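For reference, a sketch of the corrected load call with the missing argument added (this is the reported fix, not yet verified against the current docs; downloading the model requires network access and a Hub account with access to the repo):

```python
import torch
from diffusers import AutoPipelineForInpainting

# variant="fp16" makes diffusers resolve the fp16-suffixed weight files
# (e.g. diffusion_pytorch_model.fp16.safetensors) instead of the plain
# diffusion_pytorch_model.bin, which this repo's unet subfolder lacks.
pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
    variant="fp16",
)
```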

Reproduction

import PIL
import numpy as np
import torch

from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

device = "cuda"
pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
)
pipeline = pipeline.to(device)

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
repainted_image.save("repainted_image.png")

unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image)
unmasked_unchanged_image.save("force_unmasked_unchanged.png")
make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2)
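To confirm which weight files the repository actually hosts (and hence which `variant` values can load), one could list the repo contents with `huggingface_hub`. This is a diagnostic sketch, assuming network access and that the repo is still reachable on the Hub:

```python
from huggingface_hub import list_repo_files

# List everything under the unet/ subfolder; the fp16-suffixed files
# are the ones from_pretrained finds when variant="fp16" is passed.
files = list_repo_files("runwayml/stable-diffusion-inpainting")
print([f for f in files if f.startswith("unet/")])
```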

Logs

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
Cell In[8], line 9
      6 from diffusers.utils import load_image, make_image_grid
      8 device = "cuda"
----> 9 pipeline = AutoPipelineForInpainting.from_pretrained(
     10     "runwayml/stable-diffusion-inpainting",
     11     torch_dtype=torch.float16,
     12 )
     13 pipeline = pipeline.to(device)
     15 img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"

File /opt/conda/envs/idm/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File /opt/conda/envs/idm/lib/python3.10/site-packages/diffusers/pipelines/auto_pipeline.py:1058, in AutoPipelineForInpainting.from_pretrained(cls, pretrained_model_or_path, **kwargs)
   1055 inpainting_cls = _get_task_class(AUTO_INPAINT_PIPELINES_MAPPING, orig_class_name)
   1057 kwargs = {**load_config_kwargs, **kwargs}
-> 1058 return inpainting_cls.from_pretrained(pretrained_model_or_path, **kwargs)

File /opt/conda/envs/idm/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File /opt/conda/envs/idm/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:981, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    974 else:
    975     # load sub model
    976     sub_model_dtype = (
    977         torch_dtype.get(name, torch_dtype.get("default", torch.float32))
    978         if isinstance(torch_dtype, dict)
    979         else torch_dtype
    980     )
--> 981     loaded_sub_model = load_sub_model(
    982         library_name=library_name,
    983         class_name=class_name,
    984         importable_classes=importable_classes,
    985         pipelines=pipelines,
    986         is_pipeline_module=is_pipeline_module,
    987         pipeline_class=pipeline_class,
    988         torch_dtype=sub_model_dtype,
    989         provider=provider,
    990         sess_options=sess_options,
    991         device_map=current_device_map,
    992         max_memory=max_memory,
    993         offload_folder=offload_folder,
    994         offload_state_dict=offload_state_dict,
    995         model_variants=model_variants,
    996         name=name,
    997         from_flax=from_flax,
    998         variant=variant,
    999         low_cpu_mem_usage=low_cpu_mem_usage,
   1000         cached_folder=cached_folder,
   1001         use_safetensors=use_safetensors,
   1002         dduf_entries=dduf_entries,
   1003         provider_options=provider_options,
   1004     )
   1005     logger.info(
   1006         f"Loaded {name} as {class_name} from `{name}` subfolder of {pretrained_model_name_or_path}."
   1007     )
   1009 init_kwargs[name] = loaded_sub_model  # UNet(...), # DiffusionSchedule(...)

File /opt/conda/envs/idm/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py:777, in load_sub_model(library_name, class_name, importable_classes, pipelines, is_pipeline_module, pipeline_class, torch_dtype, provider, sess_options, device_map, max_memory, offload_folder, offload_state_dict, model_variants, name, from_flax, variant, low_cpu_mem_usage, cached_folder, use_safetensors, dduf_entries, provider_options)
    775     loaded_sub_model = load_method(name, **loading_kwargs)
    776 elif os.path.isdir(os.path.join(cached_folder, name)):
--> 777     loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
    778 else:
    779     # else load from the root directory
    780     loaded_sub_model = load_method(cached_folder, **loading_kwargs)

File /opt/conda/envs/idm/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File /opt/conda/envs/idm/lib/python3.10/site-packages/diffusers/models/modeling_utils.py:1147, in ModelMixin.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1142             logger.warning(
   1143                 "Defaulting to unsafe serialization. Pass `allow_pickle=False` to raise an error instead."
   1144             )
   1146     if resolved_model_file is None and not is_sharded:
-> 1147         resolved_model_file = _get_model_file(
   1148             pretrained_model_name_or_path,
   1149             weights_name=_add_variant(WEIGHTS_NAME, variant),
   1150             cache_dir=cache_dir,
   1151             force_download=force_download,
   1152             proxies=proxies,
   1153             local_files_only=local_files_only,
   1154             token=token,
   1155             revision=revision,
   1156             subfolder=subfolder,
   1157             user_agent=user_agent,
   1158             commit_hash=commit_hash,
   1159             dduf_entries=dduf_entries,
   1160         )
   1162 if not isinstance(resolved_model_file, list):
   1163     resolved_model_file = [resolved_model_file]

File /opt/conda/envs/idm/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File /opt/conda/envs/idm/lib/python3.10/site-packages/diffusers/utils/hub_utils.py:254, in _get_model_file(pretrained_model_name_or_path, weights_name, subfolder, cache_dir, force_download, proxies, local_files_only, token, user_agent, revision, commit_hash, dduf_entries)
    252         return model_file
    253     else:
--> 254         raise EnvironmentError(
    255             f"Error no file named {weights_name} found in directory {pretrained_model_name_or_path}."
    256         )
    257 else:
    258     # 1. First check if deprecated way of loading from branches is used
    259     if (
    260         revision in DEPRECATED_REVISION_ARGS
    261         and (weights_name == WEIGHTS_NAME or weights_name == SAFETENSORS_WEIGHTS_NAME)
    262         and version.parse(version.parse(__version__).base_version) >= version.parse("0.22.0")
    263     ):

OSError: Error no file named diffusion_pytorch_model.bin found in directory /home/ubuntu/.cache/huggingface/hub/models--runwayml--stable-diffusion-inpainting/snapshots/8a4288a76071f7280aedbdb3253bdb9e9d5d84bb/unet.

System Info

diffusers==0.34.0.dev0

Who can help?

No response

@Player256 Player256 added the bug Something isn't working label May 9, 2025