AI Open Source · 模型推理与部署
ollama/ollama
Ollama offers a one-command way to run LLMs locally, covering common open models such as Llama, Qwen, Gemma, DeepSeek, and GLM. A single ollama run spins up a local OpenAI-compatible endpoint, which makes it handy for researchers doing offline inference or handling privacy-sensitive data.
Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.
- Repo
- ollama/ollama
- Stars
- ★ 172k
- Language
- Go
- License
- MIT
- Last push
- 2d ago
- Created
- 2023-06-26
- Topics
- deepseek, gemma, gemma3, glm, go, golang
- Homepage
- https://ollama.com
README
Ollama
Start building with open models.
Download
macOS
curl -fsSL https://ollama.com/install.sh | sh
Windows
irm https://ollama.com/install.ps1 | iex
Linux
curl -fsSL https://ollama.com/install.sh | sh
Docker
The official Ollama Docker image ollama/ollama is available on Docker Hub.
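A minimal sketch of running the image, using the volume and port values commonly documented for the ollama/ollama image (adjust paths and flags for your setup; GPU support requires additional runtime flags):

```shell
# Pull and start the official image, persisting downloaded models
# in a named volume so they survive container restarts
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model inside the running container
docker exec -it ollama ollama run gemma3
```

The container then serves the same REST API on port 11434 as a native install.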
Libraries
Community
Get started
ollama
You'll be prompted to run a model or connect Ollama to your existing agents or applications such as Claude Code, OpenClaw, OpenCode, Codex, Copilot, and more.
Coding
To launch a specific integration:
ollama launch claude
Supported integrations include Claude Code, Codex, Copilot CLI, Droid, and OpenCode.
AI assistant
Use OpenClaw to turn Ollama into a personal AI assistant across WhatsApp, Telegram, Slack, Discord, and more:
ollama launch openclaw
Chat with a model
Run and chat with Gemma 3:
ollama run gemma3
See ollama.com/library for the full list.
See the quickstart guide for more details.
REST API
Ollama has a REST API for running and managing models.
curl http://localhost:11434/api/chat -d '{
"model": "gemma3",
"messages": [{
"role": "user",
"content": "Why is the sky blue?"
}],
"stream": false
}'
See the API documentation for all endpoints.
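For scripting against the endpoint, the request body mirrors the curl call above; a minimal sketch in Python (build_chat_request is a hypothetical helper for illustration, not part of any Ollama library):

```python
import json


def build_chat_request(model, prompt, stream=False):
    # Assemble the JSON body expected by POST /api/chat,
    # matching the shape of the curl example above
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }


body = build_chat_request("gemma3", "Why is the sky blue?")
print(json.dumps(body))
```

The resulting JSON can be POSTed to http://localhost:11434/api/chat with any HTTP client; with "stream" set to false the server replies with a single JSON object rather than a stream of chunks.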
Python
pip install ollama
from ollama import chat
response = chat(model='gemma3', messages=[
{
'role': 'user',
'content': 'Why is the sky blue?',
},
])
print(response.message.content)
JavaScript
npm i ollama
import ollama from "ollama";
const response = await ollama.chat({
model: "gemma3",
messages: [{ role: "user", content: "Why is the sky blue?" }],
});
console.log(response.message.content);
Supported backends
- llama.cpp project founded by Georgi Gerganov.
Documentation
Community Integrations
Want to add your project? Open a pull request.
Chat Interfaces
Web
- Open WebUI - Extensible, self-hosted AI interface
- Onyx - Connected AI workspace
- LibreChat - Enhanced ChatGPT clone with multi-provider support
- Lobe Chat - Modern chat framework with plugin ecosystem (docs)
- NextChat - Cross-platform ChatGPT UI (docs)
- Perplexica - AI-powered search engine, open-source Perplexity alternative
- big-AGI - AI suite for professionals
- Lollms WebUI - Multi-model web interface
- ChatOllama - Chatbot with knowledge bases
- Bionic GPT - On-premise AI platform
- Chatbot UI - ChatGPT-style web interface
- Hollama - Minimal web interface
- Chatbox - Desktop and web AI client
- chat - Chat web app for teams
- Ollama RAG Chatbot - Chat with multiple PDFs using RAG
- [Tkinter-based client](https://github.com/chyok/o