useOllama

Zero-config chat hook for Ollama. A convenience wrapper around useLocalLLM with Ollama defaults.

Signature

```ts
function useOllama(model: string, options?: OllamaOptions): LocalLLMResult;
```

Parameters

model

Type: string · Required

The Ollama model name (e.g. "gemma3:1b", "llama3.2", "mistral").

options

Type: OllamaOptions · Optional

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `endpoint` | `string` | `"http://localhost:11434"` | Ollama server URL |
| `system` | `string` | | System prompt prepended to the conversation |
| `temperature` | `number` | Model default | Sampling temperature |
| `onToken` | `(token: string) => void` | | Called on each streamed token |
| `onResponse` | `(message: ChatMessage) => void` | | Called when streaming completes |
| `onError` | `(error: Error) => void` | | Called on error |
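For concreteness, here is a sketch of how these options plausibly map onto the body of Ollama's `POST /api/chat` endpoint. The exact mapping (the `system` option becoming a leading system message, `temperature` forwarded under Ollama's `options` field) is an assumption about the wrapper's internals, not taken from the library's source:

```typescript
// Hypothetical sketch of the request body the hook might send to
// `${endpoint}/api/chat`. The system/temperature mapping is assumed.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface OllamaChatRequest {
  model: string;
  messages: ChatMessage[];
  stream: boolean;
  options?: { temperature: number };
}

function buildChatRequest(
  model: string,
  messages: ChatMessage[],
  opts: { system?: string; temperature?: number } = {}
): OllamaChatRequest {
  const body: OllamaChatRequest = {
    model,
    // A `system` option is prepended as the first message.
    messages: opts.system
      ? [{ role: "system", content: opts.system }, ...messages]
      : messages,
    stream: true, // Ollama streams newline-delimited JSON chunks
  };
  // Ollama accepts sampling parameters under an `options` field.
  if (opts.temperature !== undefined) {
    body.options = { temperature: opts.temperature };
  }
  return body;
}
```

Because the hook manages this request for you, you normally never build it by hand; the sketch only illustrates where each option ends up.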

Return value

Returns a LocalLLMResult object:

| Property | Type | Description |
| --- | --- | --- |
| `messages` | `ChatMessage[]` | Full conversation history |
| `send` | `(content: string) => void` | Send a user message |
| `isStreaming` | `boolean` | Currently generating a response |
| `isLoading` | `boolean` | Request in-flight (includes connection) |
| `abort` | `() => void` | Cancel the current generation |
| `error` | `Error \| null` | Most recent error |
| `clear` | `() => void` | Clear the conversation history |

Examples

Basic chat

```tsx
import { useOllama } from "use-local-llm";

function Chat() {
  const { messages, send, isStreaming } = useOllama("gemma3:1b");

  return (
    <div>
      {messages.map((m, i) => (
        <p key={i}>
          <b>{m.role}:</b> {m.content}
        </p>
      ))}
      <button onClick={() => send("Hello!")} disabled={isStreaming}>
        Send
      </button>
    </div>
  );
}
```

With system prompt

```ts
const { messages, send } = useOllama("mistral", {
  system: "You are a pirate. Respond in pirate speak.",
  temperature: 0.8,
});
```

With callbacks

```ts
const { messages, send } = useOllama("llama3.2", {
  onToken: (token) => console.log("Token:", token),
  onResponse: (msg) => console.log("Done:", msg.content),
  onError: (err) => console.error("Error:", err),
});
```

Custom endpoint

```ts
const { messages, send } = useOllama("gemma3:1b", {
  endpoint: "http://192.168.1.100:11434",
});
```
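Cancelling generation

The returned `abort()` presumably cancels the in-flight request via the standard `AbortController` mechanism (available in browsers and Node 18+); that wiring is an assumption about the hook's internals. A minimal sketch of the underlying mechanism:

```typescript
// Sketch of the cancellation mechanism abort() likely wraps.
// The hook's use of AbortController is assumed, not confirmed.
const controller = new AbortController();

// The hook would pass the signal to its streaming request, e.g.:
// fetch(`${endpoint}/api/chat`, { signal: controller.signal, ... });

controller.signal.addEventListener("abort", () => {
  console.log("generation cancelled");
});

controller.abort(); // what calling abort() from the hook would trigger
```

In a component you only call `abort()` from the hook's return value, typically from a "Stop" button rendered while `isStreaming` is true.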

Source

src/hooks/useOllama.ts