Please update gptel first -- errors are often fixed by the time they're reported.
I have updated gptel to the latest commit and tested that the issue still exists
Bug Description
After upgrading to gptel 20250204.2001 this morning, I can no longer send a request from a dedicated gptel-mode buffer. Every attempt fails with the following error in the echo area:
Symbol's function definition is void: \(setf\ gptel-fsm-info\)
Backend
OpenAI/Azure
Steps to Reproduce
1. M-x gptel
2. Write a prompt
3. M-x gptel-send
Error: Symbol's function definition is void: \(setf\ gptel-fsm-info\)
Additional Context
Emacs 29.1, macOS 13.6
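A void `(setf …)` function for a `cl-defstruct` accessor like `gptel-fsm-info` often points to stale byte-compiled files left over from a previous version of the package. As a hedged workaround sketch (the `elpa/gptel-20250204.2001` directory name below is an assumption; adjust it to wherever your package manager installed gptel), forcing a recompile may rule this out:

```elisp
;; Force-recompile every .el file in the installed gptel directory,
;; overwriting any stale .elc files from an earlier version.
;; NOTE: the directory name here is hypothetical -- check your own
;; package directory under `user-emacs-directory'.
(byte-recompile-directory
 (expand-file-name "elpa/gptel-20250204.2001" user-emacs-directory)
 0 'force)
```

After recompiling, restart Emacs so no old definitions remain loaded.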
Backtrace
Debugger entered--Lisp error: (void-function\(setf\ gptel-fsm-info\))
\(setf\ gptel-fsm-info\)((:token "XXXXXXXXXXXXXXXXXX" :transformer #f(compiled-function (str) #<bytecode 0xaf074f56f05b204>) :callback gptel-curl--stream-insert-response :history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is where my query goes ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t) #s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . 
DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is where my query goes ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
gptel-curl-get-response(#s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
gptel--handle-wait(#s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
#f(compiled-function (h) #<bytecode -0x13f105add5fd4cb2>)(gptel--handle-wait)
mapc(#f(compiled-function (h) #<bytecode -0x13f105add5fd4cb2>) (gptel--handle-wait))
gptel--fsm-transition(#s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
gptel-request(nil :stream t :fsm #s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
#f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent. If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending. To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive"P") #<bytecode 0x1568b520004e2b65>)(nil)
apply(#f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent. If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending. To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive"P") #<bytecode 0x1568b520004e2b65>) nil)
gptel-org--send-with-props(#f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent. If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending. To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive"P") #<bytecode 0x1568b520004e2b65>) nil)
apply(gptel-org--send-with-props #f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent. If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending. To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive"P") #<bytecode 0x1568b520004e2b65>) nil)
gptel-send(nil)
funcall-interactively(gptel-send nil)
command-execute(gptel-send)
Log Information