AI Open Source · SDKs & Developer Tools
Helicone/helicone
Open-source LLM observability platform: one line of code to monitor API calls, run evaluations, and experiment. It logs prompts, responses, tokens, latency, and cost, and supports A/B testing and prompt version management. Use it as a unified observability layer when running large-scale model experiments in research or when tracing agent behavior in production.
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
- Stars: ★ 5.7k
- Language: TypeScript
- License: Apache-2.0
- Last push: 3d ago
- Created: 2023-01-31
- Topics: agent-monitoring, analytics, evaluation, gpt, langchain, large-language-models
- Homepage: https://www.helicone.ai
README
| 🔍 Observability | 🕸️ Agent Tracing | 🚂 LLM Routing |
|---|---|---|
| 💰 Cost & Latency Tracking | 📚 Datasets & Fine-tuning | 🎛️ Automatic Fallbacks |
Helicone is an AI Gateway & LLM Observability Platform for AI Engineers
- 🌐 AI Gateway: Access 100+ AI models with 1 API key through the OpenAI API with intelligent routing and automatic fallbacks. Get started in 2 minutes.
- 🔌 Quick integration: One-line of code to log all your requests from OpenAI, Anthropic, LangChain, Gemini, Vercel AI SDK, and more.
- 📊 Observe: Inspect and debug traces & sessions for agents, chatbots, document processing pipelines, and more
- 📈 Analyze: Track metrics like cost, latency, quality, and more. Export to PostHog in one line for custom dashboards
- 🎮 Playground: Rapidly test and iterate on prompts, sessions and traces in our UI.
- 🧠 Prompt Management: Version prompts using production data. Deploy prompts through the AI Gateway without code changes. Your prompts remain under your control, always accessible.
- 🎛️ Fine-tune: Fine-tune with one of our fine-tuning partners: OpenPipe or Autonomi (more coming soon)
- 🛡️ Enterprise Ready: SOC 2 and GDPR compliant
<img src="https://github.com/user-attachments/assets/e16332e9-d642-427e-b3ce-1a74a17f7b2c" alt="Open Sourced LLM Observability & AI Gateway Platform" width="600">

🎁 Generous monthly free tier (10k requests/month) - No credit card required!
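The "Observe" feature above groups requests into traces and sessions for agents and pipelines, which is typically driven by per-request metadata headers. A minimal sketch of that idea, assuming Helicone's documented `Helicone-Session-*` header convention (verify exact names against the Helicone docs):

```typescript
// Sketch of per-request metadata for agent/session tracing.
// The header names follow Helicone's session-tracing convention;
// treat the exact names as an assumption to check against the docs.
function sessionHeaders(sessionId: string, path: string, name: string) {
  return {
    "Helicone-Session-Id": sessionId, // groups requests into one session
    "Helicone-Session-Path": path,    // position in the agent's call tree
    "Helicone-Session-Name": name,    // human-readable session label
  };
}

// Pass these as extra headers on each LLM call routed through Helicone.
const headers = sessionHeaders("run-42", "/research/summarize", "demo-agent");
```

Because the metadata travels as plain HTTP headers, it works the same whichever SDK or provider sits behind the gateway.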
Quick Start ⚡️
1. Get your API key by signing up here and add credits at helicone.ai/credits.

2. Update the `baseURL` in your code and add your API key.

   ```typescript
   import OpenAI from "openai";

   const client = new OpenAI({
     baseURL: "https://ai-gateway.helicone.ai",
     apiKey: process.env.HELICONE_API_KEY,
   });

   const response = await client.chat.completions.create({
     model: "gpt-4o-mini", // claude-sonnet-4, gemini-2.0-flash, or any model from https://www.helicone.ai/models
     messages: [{ role: "user", content: "Hello!" }],
   });
   ```

3. 🎉 You're all set! View your logs at Helicone and access 100+ models through one API.
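Since the gateway speaks the OpenAI chat format, switching between the 100+ models should only require changing the `model` string; everything else in the request stays the same. A minimal sketch of that request shape (the endpoint path here is an assumption to verify against Helicone's docs):

```typescript
// Build an OpenAI-compatible chat request for the Helicone AI Gateway.
// Only the `model` string changes per provider; the endpoint path is assumed.
function gatewayRequest(model: string, prompt: string) {
  return {
    url: "https://ai-gateway.helicone.ai/chat/completions", // assumed path
    body: {
      model, // e.g. "gpt-4o-mini", "claude-sonnet-4", "gemini-2.0-flash"
      messages: [{ role: "user" as const, content: prompt }],
    },
  };
}

// Same call shape regardless of provider:
const req = gatewayRequest("claude-sonnet-4", "Hello!");
```

This is also what enables the automatic fallbacks the README advertises: because every provider is addressed through one uniform request shape, the gateway can retry the same payload against a different model.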
Self-Hosting Open Source LLM Observability
Docker
Helicone is simple to self-host and update. To get started locally, just use our docker-compose file.
```bash
# Clone the repository
git clone https://github.com/Helicone/helicone.git
cd helicone/docker
cp .env.example .env

# Start the services
./helicone-compose.sh helicone up
```
Helm
For Enterprise workloads, we also have a production-ready Helm chart available. To access, contact us at enterprise@helicone.ai.