Vision / Image support #118


Closed
grigio opened this issue May 6, 2025 · 1 comment
Labels: support (support requests)

Comments


grigio commented May 6, 2025

It worked with OpenWebUI + Ollama, but not with OpenWebUI + llama-swap + llama.cpp:

500: Failed to parse messages: Unsupported content part type: "image_url";

ggml-org/llama.cpp#12348
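For context, the error refers to the OpenAI-style multimodal message format that OpenWebUI sends. A minimal sketch of such a request body is below; the model name and image URL are placeholders, not values from this issue. The `"image_url"` content part is the piece llama-server rejected at the time:

```python
import json

# Sketch of an OpenAI-compatible chat request with a multimodal message.
# The "image_url" content part is what triggers
# "Failed to parse messages: Unsupported content part type".
# Model name and URL are placeholders.
payload = {
    "model": "my-vision-model",  # placeholder
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Backends that accept only plain-string `content` fail on the list-of-parts form shown here.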

mostlygeek (Owner) commented

Once llama-server supports vision/multi-modal models, llama-swap will support them as well. For now you can use another inference server with vision support; I use vLLM and it works through llama-swap.
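A rough sketch of that workaround as a llama-swap config entry is below. This is an assumption-laden example, not a verified config: the `models`/`cmd`/`proxy` keys follow the shape described in llama-swap's README, and the model name and port are placeholders — check the README for the exact schema:

```yaml
# llama-swap config sketch: serve a vision model via vLLM behind llama-swap.
# Model name and port are placeholders; key names assumed from llama-swap's docs.
models:
  "my-vision-model":
    cmd: vllm serve Qwen/Qwen2-VL-7B-Instruct --port 9999
    proxy: http://127.0.0.1:9999
```

Requests sent to llama-swap for `my-vision-model` would then be proxied to the vLLM server, which accepts `image_url` content parts.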

mostlygeek added the support label May 7, 2025
2 participants