Symbol's function definition is void: (setf gptel-fsm-info) #611

Open
1 task done
tommaisey opened this issue Feb 5, 2025 · 5 comments
Labels
bug Something isn't working

Comments

@tommaisey

Please update gptel first -- errors are often fixed by the time they're reported.

  • I have updated gptel to the latest commit and tested that the issue still exists

Bug Description

After upgrading to gptel 20250204.2001 this morning, I can no longer send a request from a dedicated gptel-mode buffer. I always get the following error in the status bar:

Symbol's function definition is void: \(setf\ gptel-fsm-info\)

Backend

OpenAI/Azure

Steps to Reproduce

  1. M-x gptel
  2. Write a prompt
  3. M-x gptel-send
  4. Error: Symbol's function definition is void: \(setf\ gptel-fsm-info\)

Additional Context

Emacs 29.1, macOS 13.6

Backtrace

Debugger entered--Lisp error: (void-function \(setf\ gptel-fsm-info\))
  \(setf\ gptel-fsm-info\)((:token "XXXXXXXXXXXXXXXXXX" :transformer #f(compiled-function (str) #<bytecode 0xaf074f56f05b204>) :callback gptel-curl--stream-insert-response :history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is where my query goes ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t) #s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . 
DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is where my query goes ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
  gptel-curl-get-response(#s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
  gptel--handle-wait(#s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
  #f(compiled-function (h) #<bytecode -0x13f105add5fd4cb2>)(gptel--handle-wait)
  mapc(#f(compiled-function (h) #<bytecode -0x13f105add5fd4cb2>) (gptel--handle-wait))
  gptel--fsm-transition(#s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
  gptel-request(nil :stream t :fsm #s(gptel-fsm :state WAIT :table ((INIT (t . WAIT)) (WAIT (t . TYPE)) (TYPE (gptel--error-p . ERRS) (gptel--tool-use-p . TOOL) (t . DONE)) (TOOL (gptel--error-p . ERRS) (gptel--tool-result-p . WAIT) (t . DONE))) :handlers ((WAIT gptel--handle-wait) (TYPE gptel--handle-pre-insert) (ERRS gptel--handle-error gptel--fsm-last) (TOOL gptel--handle-tool-use) (DONE gptel--handle-post-insert gptel--fsm-last)) :info (:history (INIT) :data (:model "gpt-4o" :messages [(:role "system" :content "You are a large language model living in Emacs and...") (:role "user" :content "Here is my query ...")] :stream t :temperature 1.0) :buffer #<buffer *ChatGPT*> :position #<marker at 111 in *ChatGPT*> :backend #s(gptel-openai :name "ChatGPT" :host "api.openai.com" :header #f(compiled-function () #<bytecode 0xf1304feed94ba1c>) :protocol "https" :stream t :endpoint "/v1/chat/completions" :key gptel-api-key :models (gpt-4o gpt-4o-mini gpt-4-turbo gpt-4 gpt-4-turbo-preview gpt-4-0125-preview o1 o1-preview o1-mini o3-mini gpt-4-32k gpt-4-1106-preview gpt-3.5-turbo gpt-3.5-turbo-16k) :url "https://api.openai.com/v1/chat/completions" :request-params nil :curl-args nil) :stream t)))
  #f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent.  If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending.  To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive "P") #<bytecode 0x1568b520004e2b65>)(nil)
  apply(#f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent.  If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending.  To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive "P") #<bytecode 0x1568b520004e2b65>) nil)
  gptel-org--send-with-props(#f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent.  If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending.  To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive "P") #<bytecode 0x1568b520004e2b65>) nil)
  apply(gptel-org--send-with-props #f(compiled-function (&optional arg) "Submit this prompt to the current LLM backend.\n\nBy default, the contents of the buffer up to the cursor position\nare sent.  If the region is active, its contents are sent\ninstead.\n\nThe response from the LLM is inserted below the cursor position\nat the time of sending.  To change this behavior or model\nparameters, use prefix arg ARG activate a transient menu with\nmore options instead.\n\nThis command is asynchronous, you can continue to use Emacs while\nwaiting for the response." (interactive "P") #<bytecode 0x1568b520004e2b65>) nil)
  gptel-send(nil)
  funcall-interactively(gptel-send nil)
  command-execute(gptel-send)

Log Information

@tommaisey tommaisey added the bug Something isn't working label Feb 5, 2025
@karthink
Owner

karthink commented Feb 5, 2025

gptel didn't install or byte-compile correctly. How did you install gptel?
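For anyone hitting the same error: the `(setf gptel-fsm-info)` setter is generated by `cl-defstruct` when gptel is byte-compiled, so a "void function" error for it usually points to stale `.elc` files from a previous version. A possible workaround (a sketch only, not an official fix — it assumes gptel is on your `load-path`, e.g. installed via package.el) is to delete the stale byte-compiled files and recompile before restarting Emacs:

```elisp
;; Force a clean recompile of the installed gptel package.
;; Assumes gptel is on `load-path' (e.g. installed via package.el).
(let ((dir (file-name-directory (locate-library "gptel"))))
  ;; Delete stale byte-compiled files left over from the old version.
  (dolist (elc (directory-files dir t "\\.elc\\'"))
    (delete-file elc))
  ;; Recompile every .el file in the directory (0 = compile even if
  ;; no .elc exists; t = force recompilation).
  (byte-recompile-directory dir 0 t))
;; Restart Emacs afterwards so no stale definitions remain loaded.
```

Reinstalling the package (e.g. `M-x package-reinstall RET gptel RET` on Emacs 29+) should have the same effect.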

@e665107

This comment has been minimized.

@karthink

This comment has been minimized.

@e665107

This comment has been minimized.

@karthink
Owner

karthink commented Mar 9, 2025

Are you still experiencing this issue?
