Convenience wrapper for Gemini Nano running in Chrome preview builds with AI features switched on.
Optimized for instant and high-frequency inference. Currently, `window.ai` does not allow queueing and will cancel any ongoing request when a new request comes in. This package implements a queue that waits for the current request to finish before starting a new one. You can also clear the queue if you want to cancel all current and pending inference requests.
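To make the queueing behaviour concrete, here is a minimal sketch of the idea, not the package's actual implementation. The `createPromptQueue` and `runPrompt` names are purely illustrative; `runPrompt` stands in for whatever call the package makes into `window.ai`:

```ts
// Illustrative only: one prompt runs at a time, later prompts wait their turn.
type Job = { prompt: string; onResponse: (response: string) => void };

function createPromptQueue(runPrompt: (prompt: string) => Promise<string>) {
  const pending: Job[] = [];
  let busy = false;

  async function drain() {
    if (busy) return; // a prompt is already running
    const job = pending.shift();
    if (!job) return; // nothing queued
    busy = true;
    try {
      job.onResponse(await runPrompt(job.prompt));
    } finally {
      busy = false;
      void drain(); // start the next queued prompt, if any
    }
  }

  return {
    enqueue(prompt: string, onResponse: (response: string) => void) {
      pending.push({ prompt, onResponse });
      void drain();
    },
    clear() {
      pending.length = 0; // drop everything that has not started yet
    },
  };
}
```

The real package also cancels the request that is already in flight when the queue is cleared; that part is omitted here for brevity.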
- Streaming response from the model
- Queueing of prompts
- Automatic canceling
- `useNano` hook for React that immediately submits a prompt and streams the output
- Type declarations for Chrome's new `window.ai` object
Make sure you're running Chrome Dev or Chrome Canary with AI features enabled.
```sh
npm install https://github.com/freakyflow/use-nano
```
Creating a simple, instant inference UI:
```tsx
import { useState } from "react";
import { useNano } from "use-nano"; // package name assumed from the repo

export default function TestPage() {
  const [input, setInput] = useState("Who are you?");
  // Re-prompts whenever the input changes and streams the model's output.
  const output = useNano(input);
  return (
    <div>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <div>{output}</div>
    </div>
  );
}
```
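`useNano` also accepts an options object. Going by the queue behaviour described above, passing `clearQueue: true` should cancel the current and any pending requests each time a new prompt is submitted: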
```tsx
const output = useNano("Who are you?", { clearQueue: true });
```
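The queue can also be driven directly, outside of React. The `promptQueue` object is presumably exported by the package alongside the hook; `enqueue` takes a prompt plus a callback that receives the response, and `clear` cancels the current request and drops everything still queued: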
```ts
// Queue a prompt; the callback receives the model's response.
promptQueue.enqueue("Who are you?", (response) => console.log(response));

// Cancel the current request and drop all pending prompts.
promptQueue.clear();
```