This project demonstrates a Streamlit application for interacting with various small language models via the Hugging Face API. The application allows users to chat with different models and upload documents for question answering.
## Features

- Chat with multiple language models (Llama 3.2, Phi-3.5, Gemma, DeepSeek)
- Upload and process documents (PDF, DOCX, TXT)
- Document-based question answering
- Adjustable model parameters (temperature, top-p, max length)
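The adjustable parameters correspond to fields in the request sent to the Hugging Face Inference API. A minimal sketch of how such a payload might be assembled (the helper name is illustrative; `max_new_tokens` is the Inference API's name for the output-length cap):

```python
def build_generation_payload(prompt: str,
                             temperature: float = 0.7,
                             top_p: float = 0.9,
                             max_length: int = 512) -> dict:
    """Assemble a text-generation payload for the Hugging Face Inference API."""
    return {
        "inputs": prompt,
        "parameters": {
            "temperature": temperature,
            "top_p": top_p,
            "max_new_tokens": max_length,
        },
    }
```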
## Prerequisites

- Python 3.8 or higher
- Hugging Face API token
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/slm-poc.git
   cd slm-poc
   ```

2. Create a virtual environment and install dependencies:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. Create a `.env` file with your Hugging Face API token:

   ```
   HF_TOKEN=hf_your_token_here
   ```
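How the app loads this token depends on its implementation (many Streamlit projects use `python-dotenv`). For reference, a stdlib-only sketch of reading the token from `.env` with a fallback to the process environment (the helper name is illustrative):

```python
import os
from typing import Optional


def load_hf_token(env_path: str = ".env") -> Optional[str]:
    """Read HF_TOKEN from a .env file, falling back to the environment."""
    try:
        with open(env_path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("HF_TOKEN="):
                    return line.split("=", 1)[1]
    except FileNotFoundError:
        pass
    return os.getenv("HF_TOKEN")
```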
## Usage

Run the application:

```bash
streamlit run app.py
```

Then, in the browser UI:
- Select a model from the dropdown in the sidebar
- Adjust model parameters if needed
- Start chatting with the model
- Optionally upload a document for document-based question answering
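Document-based question answering typically splits the uploaded text into overlapping chunks so that relevant passages fit within the model's context window. A hypothetical sketch of such a splitter (not necessarily how this app implements it):

```python
from typing import List


def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """Split text into overlapping chunks so context survives chunk boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Relevant chunks can then be prepended to the user's question before it is sent to the model.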
## Project Structure

- `app.py`: Main Streamlit application
- `config/`: Configuration settings
- `models/`: Model implementations
- `utils/`: Utility functions
- `components/`: UI components
## Adding a New Model

To add a new model:

- Add the model details to `config/settings.py`
- Ensure the model is compatible with the Hugging Face API
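As an illustration, a model entry in `config/settings.py` might look like the following. The field names and the `MODELS` dict are hypothetical; mirror the structure of the entries already in the file:

```python
# Hypothetical config entry; match the structure used by existing
# entries in config/settings.py.
MODELS = {
    "Llama 3.2": {
        "model_id": "meta-llama/Llama-3.2-1B-Instruct",  # Hugging Face repo id
        "max_length": 512,
        "temperature": 0.7,
    },
}
```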
## License

This project is licensed under the MIT License - see the LICENSE file for details.