
Azure :models parameter is ignored #571

Open
astoff opened this issue Jan 16, 2025 · 4 comments
Labels
bug Something isn't working

Comments

astoff commented Jan 16, 2025

Please update gptel first -- errors are often fixed by the time they're reported.

  • I have updated gptel to the latest commit and tested that the issue still exists

Bug Description

It seems that the :models parameter of gptel-make-azure doesn't have any effect, and the model being called is determined by YOUR_DEPLOYMENT_NAME alone (cf. https://github.com/karthink/gptel?tab=readme-ov-file#azure).

Backend

None

Steps to Reproduce

Try

(setq gptel-backend
      (gptel-make-azure "Azure-1"
        :protocol "https"
        :host "YOUR_RESOURCE_NAME.openai.azure.com"
        :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
        :stream t
        :key #'gptel-api-key
        :models '(gpt-4o gpt-999)))

The model Azure-1:gpt-999 works and gives the same answers as Azure-1:gpt-4o.

Additional Context

Emacs 30 on Linux


@astoff astoff added the bug Something isn't working label Jan 16, 2025
@karthink (Owner)

I don't know how Azure should be configured, sorry. Is YOUR_DEPLOYMENT_NAME always predictable from the model name, across all Azure deployments? If it is, we can generate the endpoint dynamically from the active gptel-model.


astoff commented Jan 17, 2025

> I don't know how Azure should be configured, sorry.

I'm no expert either, but I hope my comment helps anyway.

YOUR_DEPLOYMENT_NAME is not necessarily the model name; rather, it's an arbitrary "nickname" you create for a model when you deploy it. In my experience so far, an Azure model is completely determined by a :host, a DEPLOYMENT_NAME, and an api-version date.

The model property in the request body doesn't seem to do anything (the API is OpenAI-compatible, but what that implies for the model property is unclear to me...).
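To illustrate the routing (a hypothetical sketch, assuming the standard Azure OpenAI URL scheme; `my/azure-endpoint` is a made-up helper, not part of gptel): the deployment segment of the URL path selects the model, while the model key in the JSON body is accepted only for OpenAI compatibility.

```emacs-lisp
;; Hypothetical sketch: the Azure OpenAI chat endpoint has the form
;;   https://RESOURCE.openai.azure.com/openai/deployments/DEPLOYMENT/chat/completions?api-version=DATE
;; The DEPLOYMENT path segment selects the model; the "model" key in the
;; request body appears to be ignored by Azure.
(defun my/azure-endpoint (deployment api-version)
  "Build the Azure chat-completions endpoint path for DEPLOYMENT.
DEPLOYMENT is the arbitrary nickname chosen when deploying a model."
  (format "/openai/deployments/%s/chat/completions?api-version=%s"
          deployment api-version))

(my/azure-endpoint "my-gpt-4o" "2023-05-15")
;; => "/openai/deployments/my-gpt-4o/chat/completions?api-version=2023-05-15"
```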

@karthink (Owner)

gptel makes the assumption that there is always an active model, so the backend has to specify a model, even if it's a dummy.


astoff commented Jan 24, 2025

So I guess the conclusion is that the endpoint may depend on the model. Maybe you could make the :endpoint slot accept a function taking the model and the backend as arguments.

Alternatively, do something special for Azure: add an api-version slot and a hard-coded rule to compute the endpoint from it and the model/deployment name.
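A sketch of the first suggestion (purely hypothetical: gptel's :endpoint slot currently takes a string, not a function, and the assumed signature and the assumption that the deployment nickname matches the gptel model symbol are both mine):

```emacs-lisp
;; Hypothetical: if :endpoint accepted a function of (MODEL BACKEND),
;; gptel could route each model to its own Azure deployment.
;; Assumes each deployment nickname equals the gptel model symbol.
(gptel-make-azure "Azure-1"
  :protocol "https"
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint (lambda (model _backend)
              (format "/openai/deployments/%s/chat/completions?api-version=2023-05-15"
                      model))
  :stream t
  :key #'gptel-api-key
  :models '(gpt-4o))
```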
