I don't know how Azure should be configured, sorry. Is YOUR_DEPLOYMENT_NAME always predictable from the model name, across all Azure deployments? If it is, we can generate the endpoint dynamically from the active gptel-model.
> I don't know how Azure should be configured, sorry.

I'm no expert either; I hope my comment helps anyway.
The YOUR_DEPLOYMENT_NAME is not necessarily the model name; rather, it's an arbitrary "nickname" you create for a deployed model. In my experience so far, an Azure model is completely determined by a :host, a DEPLOYMENT_NAME, and an api-version date.
The model property in the request body seems to have no effect (the API is OpenAI-compatible, but what that implies for the model property is unclear to me).
So I guess the conclusion is that the endpoint might depend on the model. Maybe you could make the endpoint slot accept a function taking the model and the backend as arguments.
Or else, do something special for Azure: add an api-version slot and a hard-coded rule to compute the endpoint from that and the model/deployment name.
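To make the two proposals concrete, here is a sketch of what such a hard-coded rule could look like today, written by hand in the backend definition. It is based on the README's Azure example; YOUR_RESOURCE_NAME, YOUR_DEPLOYMENT_NAME, and the api-version date are placeholders you would fill in for your own deployment, and the idea of deriving :endpoint from the deployment name is the suggestion above, not current gptel behavior.

```elisp
;; Sketch: compute the Azure endpoint from a deployment name and an
;; api-version date.  Placeholder values below are assumptions; replace
;; them with your own resource, deployment, and api-version.
(let ((deployment "YOUR_DEPLOYMENT_NAME")
      (api-version "2023-05-15"))
  (gptel-make-azure "Azure-1"
    :protocol "https"
    :host "YOUR_RESOURCE_NAME.openai.azure.com"
    :endpoint (format "/openai/deployments/%s/chat/completions?api-version=%s"
                      deployment api-version)
    :stream t
    :key #'gptel-api-key
    :models '(gpt-4o)))
```

An api-version slot on the Azure backend would amount to gptel running this format call internally instead of the user writing the endpoint string by hand.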
Please update gptel first -- errors are often fixed by the time they're reported.
Bug Description
It seems that the :models parameter of gptel-make-azure doesn't have any effect, and the model being called is determined by YOUR_DEPLOYMENT_NAME alone (cf. https://github.com/karthink/gptel?tab=readme-ov-file#azure).

Backend
None
Steps to Reproduce

Try

The model Azure-1:gpt-999 works and gives the same answers as Azure-1:gpt-4o.
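The reproduction snippet itself did not survive in this copy of the report. A sketch of the kind of setup the sentence above implies, assuming the README's Azure example with a deliberately bogus model name added (host and deployment values are placeholders, not from the original report):

```elisp
;; Sketch of the reported setup: :models lists gpt-999, a model that
;; does not exist, alongside gpt-4o.  Per the report, selecting either
;; one yields the same answers, because the deployment in :endpoint is
;; what actually determines the model.
(gptel-make-azure "Azure-1"
  :protocol "https"
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
  :stream t
  :key #'gptel-api-key
  :models '(gpt-999 gpt-4o))
```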
Additional Context
Emacs 30 on Linux
Backtrace
Log Information