Endpoints

The endpoints module (src/utils/endpoints.ts) provides backend presets, API path mappings, and auto-detection logic.

ENDPOINTS

Default endpoint configurations for supported backends.

import { ENDPOINTS } from "use-local-llm";

ENDPOINTS.ollama;
// → { url: "http://localhost:11434", backend: "ollama" }

ENDPOINTS.lmstudio;
// → { url: "http://localhost:1234", backend: "lmstudio" }

ENDPOINTS.llamacpp;
// → { url: "http://localhost:8080", backend: "llamacpp" }

CHAT_PATHS

API paths for chat completions, mapped by backend.

import { CHAT_PATHS } from "use-local-llm";

CHAT_PATHS.ollama; // "/api/chat"
CHAT_PATHS.lmstudio; // "/v1/chat/completions"
CHAT_PATHS.llamacpp; // "/v1/chat/completions"
CHAT_PATHS["openai-compatible"]; // "/v1/chat/completions"

GENERATE_PATHS

API paths for text generation (non-chat), mapped by backend.

import { GENERATE_PATHS } from "use-local-llm";

GENERATE_PATHS.ollama; // "/api/generate"
GENERATE_PATHS.lmstudio; // "/v1/completions"
GENERATE_PATHS.llamacpp; // "/v1/completions"
GENERATE_PATHS["openai-compatible"]; // "/v1/completions"

MODEL_LIST_PATHS

API paths for listing available models, mapped by backend.

import { MODEL_LIST_PATHS } from "use-local-llm";

MODEL_LIST_PATHS.ollama; // "/api/tags"
MODEL_LIST_PATHS.lmstudio; // "/v1/models"
MODEL_LIST_PATHS.llamacpp; // "/v1/models"
MODEL_LIST_PATHS["openai-compatible"]; // "/v1/models"
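
Combining an endpoint preset with a path map yields the full request URL for a backend. The sketch below is illustrative only: it inlines local constants mirroring the documented values (rather than importing from use-local-llm) so it runs standalone; chatUrl is a hypothetical helper, not part of the library's API.

```typescript
// Local constants mirroring the documented presets and chat paths.
type Backend = "ollama" | "lmstudio" | "llamacpp" | "openai-compatible";

const ENDPOINTS = {
  ollama: { url: "http://localhost:11434", backend: "ollama" as Backend },
  lmstudio: { url: "http://localhost:1234", backend: "lmstudio" as Backend },
  llamacpp: { url: "http://localhost:8080", backend: "llamacpp" as Backend },
};

const CHAT_PATHS: Record<Backend, string> = {
  ollama: "/api/chat",
  lmstudio: "/v1/chat/completions",
  llamacpp: "/v1/chat/completions",
  "openai-compatible": "/v1/chat/completions",
};

// Hypothetical helper: join a preset's base URL with its chat path.
function chatUrl(preset: { url: string; backend: Backend }): string {
  return preset.url + CHAT_PATHS[preset.backend];
}

console.log(chatUrl(ENDPOINTS.ollama)); // "http://localhost:11434/api/chat"
```

The same pattern applies to GENERATE_PATHS and MODEL_LIST_PATHS.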

detectBackend

Auto-detects the backend type from a URL by its port number. Unknown ports and unparseable URLs fall back to "openai-compatible".

import { detectBackend } from "use-local-llm";

detectBackend("http://localhost:11434"); // "ollama"
detectBackend("http://localhost:1234"); // "lmstudio"
detectBackend("http://localhost:8080"); // "llamacpp"
detectBackend("http://localhost:3000"); // "openai-compatible"
detectBackend("http://my-server.com:5000"); // "openai-compatible"
detectBackend("invalid-url"); // "openai-compatible"

Port mapping

Port        Backend
11434       ollama
1234        lmstudio
8080        llamacpp
Any other   openai-compatible
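
The port mapping above can be sketched as follows. This is a standalone re-implementation of the documented behavior for illustration, not the library's actual source; detectBackendSketch and PORT_BACKENDS are names invented here.

```typescript
// Documented port-to-backend mapping.
const PORT_BACKENDS: Record<string, string> = {
  "11434": "ollama",
  "1234": "lmstudio",
  "8080": "llamacpp",
};

function detectBackendSketch(url: string): string {
  try {
    // URL.port is the port as a string, e.g. "11434".
    const port = new URL(url).port;
    return PORT_BACKENDS[port] ?? "openai-compatible";
  } catch {
    // Unparseable URLs fall back to the generic preset.
    return "openai-compatible";
  }
}

console.log(detectBackendSketch("http://localhost:11434")); // "ollama"
console.log(detectBackendSketch("invalid-url")); // "openai-compatible"
```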

Full API path reference

Backend             Chat                        Generate               Models
Ollama              POST /api/chat              POST /api/generate     GET /api/tags
LM Studio           POST /v1/chat/completions   POST /v1/completions   GET /v1/models
llama.cpp           POST /v1/chat/completions   POST /v1/completions   GET /v1/models
OpenAI-compatible   POST /v1/chat/completions   POST /v1/completions   GET /v1/models
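
As a usage illustration of the table above, here is a minimal chat request against the Ollama preset. The model name "llama3" and message body shape are assumptions for the example; the request is constructed but not sent, so no server is required.

```typescript
// Build (but do not send) a POST to Ollama's chat endpoint.
const req = new Request("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3", // assumed model name for illustration
    messages: [{ role: "user", content: "Hello" }],
  }),
});

// `await fetch(req)` would dispatch it to a running server.
console.log(req.method, req.url);
```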