
Undefine default ChatGPT backend #649

Open
acceleratesage opened this issue Feb 17, 2025 · 9 comments
Labels
question Further information is requested

Comments

@acceleratesage

Hi, is there a way to undefine the default backend and models? I'd like to only work with a selected list of backends.

acceleratesage added the question label Feb 17, 2025
@karthink
Owner

(setf (gptel-get-backend "ChatGPT") nil)

@acceleratesage
Author

When using this in :config in use-package, it fails with:

Error (use-package): gptel/:config: Symbol's function definition is void: \(setf\ gptel-get-backend\)

When putting it outside of use-package, it works.
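
A plausible workaround (a sketch, not confirmed in this thread): since the setf expander for gptel-get-backend only becomes available once gptel itself is loaded, deferring the call until after load should avoid the void-function error even from an init file:

;; Sketch of a workaround: defer the call until gptel has loaded, so
;; that the setf expander for `gptel-get-backend' is defined.
(with-eval-after-load 'gptel
  (setf (gptel-get-backend "ChatGPT") nil))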

@karthink
Owner

karthink commented Feb 19, 2025 via email

@acceleratesage
Author

(use-package gptel
  :config

  (gptel-make-openai "Groq"
    :host "api.groq.com"
    :endpoint "/openai/v1/chat/completions"
    :key gptel-api-key
    :stream t
    :models '(deepseek-r1-distill-llama-70b llama-3.3-70b-versatile))

  (gptel-make-openai "xAI"
    :host "api.x.ai"
    :key gptel-api-key
    :endpoint "/v1/chat/completions"
    :stream t
    :models '(grok-2-1212 grok-2-vision-1212))

  (setq gptel-default-mode 'org-mode
        gptel-backend (gptel-get-backend "Groq")
        gptel-model 'deepseek-r1-distill-llama-70b))

;; Only works outside of use-package
(setf (gptel-get-backend "ChatGPT") nil)

@benma

benma commented Mar 4, 2025

This ideally belongs in the README; it's not trivial to figure out how to undefine a backend.

@rudolf-adamkovic

+1

My model menu is spammed with proprietary ChatGPT offerings as well, even though I configured gptel to use Ollama:

(with-eval-after-load 'gptel
  (let ((models '(llama3.1:8b
                  phi4:14b
                  deepseek-r1:14b)))
    (setq gptel-model (car models)
          gptel-backend (gptel-make-ollama "Ollama"
                          :host "localhost:11434"
                          :stream t
                          :models models))))

Expected:

  • Ollama:phi4:14b
  • Ollama:llama3.1:8b
  • Ollama:deepseek-r1:14b

Actual:

  • ChatGPT:gpt-3.5-turbo More expensive & less capable than GPT-4o-mini; use that instead (tool-use) 16k $ 0.50 in, $ 1.50 out 2021-09
  • ChatGPT:gpt-3.5-turbo-16k More expensive & less capable than GPT-4o-mini; use that instead (tool-use) 16k $ 3.00 in, $ 4.00 out 2021-09
  • ChatGPT:gpt-4 GPT-4 snapshot from June 2023 with improved function calling support (media tool-use url) 8k $30.00 in, $ 60.00 out 2023-09
  • ChatGPT:gpt-4-0125-preview GPT-4 Turbo preview model intended to reduce cases of “laziness” (media tool-use url) 128k $10.00 in, $ 30.00 out 2023-12
  • ChatGPT:gpt-4-1106-preview Preview model with improved function calling support (tool-use) 128k $10.00 in, $ 30.00 out 2023-04
  • ChatGPT:gpt-4-32k (tool-use) $60.00 in, $120.00 out
  • ChatGPT:gpt-4-turbo Previous high-intelligence model (media tool-use url) 128k $10.00 in, $ 30.00 out 2023-12
  • ChatGPT:gpt-4-turbo-preview Points to gpt-4-0125-preview (media tool-use url) 128k $10.00 in, $ 30.00 out 2023-12
  • ChatGPT:gpt-4o Advanced model for complex tasks; cheaper & faster than GPT-Turbo (media tool-use json url) 128k $ 2.50 in, $ 10.00 out 2023-10
  • ChatGPT:gpt-4o-mini Cheap model for fast tasks; cheaper & more capable than GPT-3.5 Turbo (media tool-use json url) 128k $ 0.15 in, $ 0.60 out 2023-10
  • ChatGPT:o1 Reasoning model designed to solve hard problems across domains (nosystem media reasoning) 200k $15.00 in, $ 60.00 out 2023-10
  • ChatGPT:o1-mini Faster and cheaper reasoning model good at coding, math, and science (nosystem reasoning) 128k $ 3.00 in, $ 12.00 out 2023-10
  • ChatGPT:o1-preview DEPRECATED: PLEASE USE o1 (nosystem media) 128k $15.00 in, $ 60.00 out 2023-10
  • ChatGPT:o3-mini High intelligence at the same cost and latency targets of o1-mini (nosystem reasoning) 200k $ 3.00 in, $ 12.00 out 2023-10
  • Ollama:deepseek-r1:14b
  • Ollama:llama3.1:8b
  • Ollama:phi4:14b
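
Until the default backend is removable out of the box, the extra ChatGPT entries can presumably be dropped by adding the setf form mentioned earlier in the thread to the same block. A sketch, assuming the setf expander for gptel-get-backend works once gptel is loaded:

;; Sketch: same Ollama setup, with the default ChatGPT backend removed
;; so that only the Ollama models show up in the model menu.
(with-eval-after-load 'gptel
  (setf (gptel-get-backend "ChatGPT") nil)
  (let ((models '(llama3.1:8b phi4:14b deepseek-r1:14b)))
    (setq gptel-model (car models)
          gptel-backend (gptel-make-ollama "Ollama"
                          :host "localhost:11434"
                          :stream t
                          :models models))))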

@acceleratesage
Author

For what it's worth, I think there should be no backend defined by default. Some might not even want the possibility of sending their data to OpenAI without explicitly enabling it. Making it easy to integrate and not having to list all the models is great though.

@karthink
Owner

This ideally goes into the README, it's not trivial to figure out this solution
to undefine a backend.

@benma There appears to be a bug with gptel-get-backend: it's not being recognized as a generalized variable in the normal flow of loading gptel.el. Once I fix this bug I'll add it to the README.

@karthink
Owner

For what it's worth, I think there should be no backend defined by default.

@acceleratesage Most gptel users are still using ChatGPT, best I can tell. When they update gptel and it stops working, I'm going to have a rough time with the flood of support requests.

Some might not even want the possibility of sending their data to OpenAI
without explicitly enabling it. Making it easy to integrate and not having to
list all the models is great though.

I'll settle for making the default backend easy to disable.


4 participants