Imagine having a personal AI assistant at your fingertips, ready to answer your questions, provide information, and even entertain you. Welcome to Local AI Assistant, a Streamlit app that harnesses the power of Ollama (for serving LLMs locally) and LangChain to bring conversational AI to your local machine.
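Under the hood, the app uses LangChain's Ollama integration to talk to a locally running Ollama server. As a rough sketch of the idea (not the exact code in `app.py`; the `ChatOllama` class from `langchain-community` and the model name `llama3` are assumptions here):

```python
# Minimal sketch: a LangChain chat model backed by a local Ollama server.
# Assumes Ollama is running locally and the model has already been pulled.
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="llama3")  # hypothetical model name; use any model you have pulled

reply = llm.invoke([HumanMessage(content="In one sentence, what can you help me with?")])
print(reply.content)
```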
To use this project, you first need Ollama installed and running on your system. Follow these steps (an optional end-to-end check is sketched after the setup steps below):

- Install Ollama for your operating system from https://ollama.com/download. Note that `pip install ollama` only installs the Python client library, not the Ollama server and CLI itself.
- Verify the installation by running:

  ```
  ollama --version
  ```

- Pull the model(s) you want the assistant to use, replacing `<model-name>` with a model from the Ollama library (for example, `llama3`):

  ```
  ollama pull <model-name>
  ```
- Clone this repository using Git:

  ```
  git clone git@github.com:taofiqsulayman/local-ai-assistant.git
  ```
- Create a new virtual environment for the project:

  ```
  python -m venv venv
  ```

- Activate the virtual environment:
  - On Windows:

    ```
    venv\Scripts\activate
    ```

  - On macOS/Linux:

    ```
    source venv/bin/activate
    ```

- Install the required dependencies by running:

  ```
  pip install -r requirements.txt
  ```

- Run the Streamlit app by executing:

  ```
  streamlit run app.py
  ```
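Before opening the app, it can be worth confirming that the Ollama server is reachable and has at least one model pulled. A quick, optional check (this script is not part of the repository; it assumes Ollama's default address `http://localhost:11434`):

```python
# Optional sanity check: list the models the local Ollama server has available.
# Assumes the default Ollama address; adjust if you changed OLLAMA_HOST.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = resp.json().get("models", [])
if not models:
    print("Ollama is running, but no models are pulled yet. Run `ollama pull <model-name>` first.")
for m in models:
    print(m["name"])
```

Once the app is open in your browser: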
- Select a model from the dropdown list.
- Type your message in the chat input field.
- Press Enter to send the message and receive a response from the AI assistant.
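Behind these steps, the chat flow is roughly the following (a sketch under the assumption that `app.py` keeps the conversation in `st.session_state` and streams tokens from the model via LangChain; the actual code and model list may differ):

```python
# Sketch of a Streamlit chat loop with history; not the exact app.py.
import streamlit as st
from langchain_community.chat_models import ChatOllama

model_name = st.selectbox("Model", ["llama3", "mistral"])  # hypothetical model list
llm = ChatOllama(model=model_name)

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for role, text in st.session_state.messages:
    with st.chat_message(role):
        st.write(text)

# Read new input, stream the model's answer, and store both turns.
if prompt := st.chat_input("Type your message"):
    st.session_state.messages.append(("user", prompt))
    with st.chat_message("user"):
        st.write(prompt)
    with st.chat_message("assistant"):
        answer = st.write_stream(chunk.content for chunk in llm.stream(prompt))
    st.session_state.messages.append(("assistant", answer))
```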
- If you encounter any issues with Ollama, refer to the Ollama documentation (https://github.com/ollama/ollama).
- For Streamlit-related issues, check the Streamlit documentation (https://docs.streamlit.io).
Contributions are welcome! If you'd like to contribute to this project, please fork the repository, make changes, and submit a pull request.