PlayWithLLM

PlayWithLLM is a powerful and user-friendly system designed to streamline interactions with Ollama-hosted large language models (LLMs). This tool lets you send inference requests to Ollama-hosted models through a simple API, making it easier to integrate and experiment with LLMs in your projects.

Key Features

  • API Integration: Exposes Ollama-hosted LLMs through an API, enabling seamless inference requests
  • Request History Tracking: Automatically stores all inference requests, including details like token usage, request logs, costs, and duration
  • Admin UI: Provides an intuitive interface to monitor and manage all your LLM interactions in one place
  • Local Setup: Easy-to-follow instructions to clone, install, and run the system on your local machine
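To illustrate the API integration feature, here is a minimal sketch of sending an inference request. The endpoint path and payload shape are assumptions modeled on Ollama's standard `/api/generate` interface; PlayWithLLM's actual routes may differ.

```javascript
// Sketch: sending an inference request to an Ollama-style API.
// The /api/generate path and payload fields below follow Ollama's
// documented interface; treat them as assumptions for PlayWithLLM.

function buildInferenceRequest(model, prompt) {
  return {
    model,         // e.g. "llama3" — any model pulled into Ollama
    prompt,        // the text to complete
    stream: false, // request a single JSON response instead of a stream
  };
}

async function runInference(baseUrl, model, prompt) {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildInferenceRequest(model, prompt)),
  });
  return res.json();
}
```

With Ollama running locally you would call something like `runInference("http://localhost:11434", "llama3", "Why is the sky blue?")`; the returned JSON includes the generated text and token counts, which is the data a tracking layer can record.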

Use Cases

  • Developers: Quickly integrate and test Ollama-hosted LLMs in your applications
  • Researchers: Track and analyze inference requests for experiments and studies
  • AI Enthusiasts: Explore the capabilities of state-of-the-art language models with minimal setup
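For researchers tracking inference requests, a history entry might capture token usage, duration, and cost per call. The field names below are hypothetical, chosen to match the metrics listed under Key Features; PlayWithLLM's actual schema may differ.

```javascript
// Sketch of a per-request history record covering the metrics named
// in Key Features (token usage, cost, duration). Field names are
// assumptions, not PlayWithLLM's actual schema.

function makeHistoryRecord({ model, promptTokens, completionTokens, durationMs, costUsd }) {
  return {
    model,
    promptTokens,
    completionTokens,
    totalTokens: promptTokens + completionTokens, // derived for convenience
    durationMs,
    costUsd,
    createdAt: new Date().toISOString(), // timestamp for later analysis
  };
}
```

Storing one such record per request makes it straightforward to aggregate usage over time in the Admin UI, e.g. total tokens or average latency per model.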

Getting Started

To get started with PlayWithLLM, follow the installation and setup instructions below:

PlayWithLLM Setup Guide

For more detailed documentation, please refer to the resources provided in this repository.

Pinned Repositories

  1. admin-ui (Public): TypeScript

  2. server (Public template): JavaScript
