2.3.20 Satellite: OpenHands
Handle: openhands
URL: http://localhost:34141
OpenHands agents can do anything a human developer can: modify code, run commands, browse the web, call APIs, and yes—even copy code snippets from StackOverflow.
# [Optional] pre-pull the image
harbor pull openhands
# [Optional] pre-pull the runtime image
docker pull docker.all-hands.dev/all-hands-ai/runtime:latest-nikolaik
# [Optional] Launch Harbor with LLM backends
# Otherwise, you can connect openhands to non-Harbor LLM backends
harbor up
# Navigate to the target directory
cd ./your-code-project
# Launch openhands in a given workspace
harbor openhands
# Open openhands in the browser
harbor open openhands
Unfortunately, due to the current state of the project, Harbor can only pre-configure the base URL of its Ollama instance for openhands; all other settings have to be entered manually via the OpenHands UI. Please open an issue in the Harbor repo if you see that this has changed.
OpenHands runs litellm under the hood, so custom LLM configuration needs to be done according to the litellm documentation.
For example, for the built-in Ollama instance (see the sketch after this list):
- Run harbor url -i ollama to get the internal URL of the ollama service
- Add /v1 to the end of the URL - http://ollama:11434/v1
- Get the model ID you want to use and prefix it with openai/
  - For example, qwen2.5:7b-instruct-q8_0 becomes openai/qwen2.5:7b-instruct-q8_0
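Putting it together, here is a rough sketch of the resulting values. The model name below is only an example (substitute whatever model you have pulled), and the exact field names may vary by OpenHands version:
# Get the internal URL of the ollama service
harbor url -i ollama
# e.g. http://ollama:11434

# Values to enter in the OpenHands UI (LLM settings, advanced options):
#   Custom Model: openai/qwen2.5:7b-instruct-q8_0
#   Base URL:     http://ollama:11434/v1
#   API Key:      any non-empty string (ollama ignores it, but the field can't be empty)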
OpenHands requires a fairly powerful LLM with tool support to accomplish its tasks. If you're using ollama, you can see compatible models here. Aim for as large a model as you can run, as OpenHands is a very demanding service. You should also ensure that you can run the model with ~16k context length or more, so that all of the needed context reaches the model. You can expect reasonable results from 70B+ models.
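As a rough sketch of raising the context window with ollama, you can derive a variant of a model with a larger num_ctx via a Modelfile. The model name below is just an example, and depending on your setup you may need to run this against Harbor's ollama container rather than a local install:
# Derive a variant of the base model with a 16k context window
cat > Modelfile <<'EOF'
FROM qwen2.5:7b-instruct-q8_0
PARAMETER num_ctx 16384
EOF
# Register the variant, then use openai/qwen2.5:7b-instruct-q8_0-16k in OpenHands
ollama create qwen2.5:7b-instruct-q8_0-16k -f Modelfile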