AI Open Source · SDK 与开发工具
openlit/openlit
An open-source platform for AI engineering: OpenTelemetry-based LLM observability, plus GPU monitoring, guardrails, evaluations, prompt management, a secrets vault, and a playground. It integrates with 50+ model providers, vector databases, and agent frameworks, making it a good fit for teams that want to wire LLM call traces into an existing Grafana/ClickHouse observability stack.
Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Management, Vault, Playground. 🚀💻 Integrates with 50+ LLM Providers, VectorDBs, Agent Frameworks and GPUs.
- Repo
- openlit/openlit
- Stars
- ★ 2.4k
- Language
- TypeScript
- License
- Apache-2.0
- Last push
- 1d ago
- Created
- 2024-01-23
- Topics
- ai-observability, amd-gpu, clickhouse, distributed-tracing, genai, gpu-monitoring
- Homepage
- https://docs.openlit.io
README
Observability, Evaluations, Rule Engine, Guardrails, Prompts, Vault, Playground, FleetHub
Open Source Platform for AI Engineering
Documentation | Quickstart | Python SDK | Typescript SDK | Go SDK
❤️ Sponsor this project ❤️
OpenLIT allows you to simplify your AI development workflow, especially for Generative AI and LLMs. It streamlines essential tasks like experimenting with LLMs, organizing and versioning prompts, and securely handling API keys. With just one line of code, you can enable OpenTelemetry-native observability, offering full-stack monitoring that includes LLMs, vector databases, and GPUs. This enables developers to confidently build AI features and applications, transitioning smoothly from testing to production.
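The "one line of code" above refers to the SDK's init call. As a minimal sketch based on the project's documented Python SDK (the OTLP endpoint shown is an assumption for a locally running collector; adjust it to your setup):

```python
# Minimal sketch of enabling OpenLIT instrumentation in Python.
# Assumes `pip install openlit` and an OpenTelemetry Collector
# listening at the (hypothetical) local endpoint below.
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# From here on, calls to supported LLM providers and vector DBs
# made by the application are traced automatically and exported
# to the collector as OpenTelemetry traces and metrics.
```

After this call, no per-request changes are needed; instrumentation hooks into the supported client libraries.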
This project proudly follows and maintains the Semantic Conventions with the OpenTelemetry community, consistently updating to align with the latest standards in Observability.
⚡ Features
- 📈 Analytics Dashboard: Monitor your AI application's health and performance with detailed dashboards that track metrics, costs, and user interactions, providing a clear view of overall efficiency.
- 🔌 OpenTelemetry-native Observability SDKs: Vendor-neutral SDKs (Python, TypeScript, Go) to send traces and metrics to your existing observability tools.
- 🛡️ 11 Built-in Evaluation Types: Automated LLM-as-a-Judge evaluation with hallucination, bias, toxicity, safety, instruction following, completeness, conciseness, sensitivity, relevance, coherence, and faithfulness detection. Context-aware evaluation that treats provided context as the source of truth.
- ⚙️ Rule Engine: Define conditional rules with AND/OR logic to match runtime trace attributes and dynamically retrieve contexts, prompts, and evaluation configs. SDK support across Python, TypeScript, and Go.
- 💲 Cost Tracking for Custom and Fine-Tuned Models: Tailor cost estimations for specific models using custom pricing files for precise budgeting.
- 🐛 Exceptions Monitoring Dashboard: Quickly spot and resolve issues by tracking common exceptions and errors with a dedicated monitoring dashboard.
- 💭 Prompt Management: Manage and version prompts using Prompt Hub for consistent and easy access across applications.
- 🔑 API Keys and Secrets Management: Securely handle your API keys and secrets centrally, avoiding insecure practices.
- 🎮 Experiment with different LLMs: Use OpenGround to explore, test and compare various LLMs side by side.
- 🚀 Fleet Hub for OpAMP Management: Centrally manage and monitor OpenTelemetry Collectors across your infrastructure using the OpAMP (Open Agent Management Protocol) with secure TLS communication.
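The AND/OR matching that the Rule Engine feature describes can be sketched in plain Python. Note that the rule shape and attribute names below are illustrative assumptions, not OpenLIT's actual rule schema:

```python
# Hypothetical sketch of AND/OR rule matching over trace attributes.
# The rule format here is illustrative, not OpenLIT's real schema.

def matches(rule: dict, attrs: dict) -> bool:
    """Return True when the trace attributes satisfy the rule."""
    if "AND" in rule:
        return all(matches(r, attrs) for r in rule["AND"])
    if "OR" in rule:
        return any(matches(r, attrs) for r in rule["OR"])
    # Leaf condition: a single attribute equality check.
    return attrs.get(rule["key"]) == rule["value"]

# Example: match OpenAI-backed spans in prod OR staging.
rule = {
    "AND": [
        {"key": "gen_ai.system", "value": "openai"},
        {"OR": [
            {"key": "env", "value": "prod"},
            {"key": "env", "value": "staging"},
        ]},
    ]
}

print(matches(rule, {"gen_ai.system": "openai", "env": "prod"}))  # True
print(matches(rule, {"gen_ai.system": "openai", "env": "dev"}))   # False
```

When a rule matches at runtime, the platform can then attach the associated context, prompt, or evaluation config to that request.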
🚀 Getting Started with LLM Observability
```mermaid
flowchart TB;
    subgraph " "
    direction LR;
        subgraph " "
        direction LR;
            OpenLIT_SDK[OpenLIT SDK] -->|Sends Traces & Metrics| OTC[OpenTelemetry Collector];
            OTC -->|Stores Data| ClickHouseDB[ClickHouse];
        end
        subgraph " "
        direction RL;
            OpenLIT_UI[OpenLIT] -->|Pulls Data| ClickHouseDB;
        end
    end
```
Step 1: Deploy OpenLIT Stack
- Git clone the OpenLIT repository. Open your command line or terminal and run:
  `git clone git@github.com:openlit/openlit.git`
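Per the project's own quickstart, the typical next step is to start the stack with Docker Compose (standard Docker Compose invocation; verify the exact commands against the current upstream documentation):

```shell
# Enter the cloned repository and start the OpenLIT stack.
# Requires Docker with the Compose plugin installed.
cd openlit
docker compose up -d
```

Once the containers are up, the OpenLIT UI reads trace data from ClickHouse as shown in the architecture diagram above.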