# Laminar Python

Python SDK for [Laminar](https://www.lmnr.ai).

[Laminar](https://www.lmnr.ai) is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.

Check out our [open-source repo](https://github.com/lmnr-ai/lmnr) and don't forget to star it ⭐

## Quickstart

First, install the package, specifying the instrumentations you want to use. For example, to install the package with OpenAI and Anthropic instrumentations:

```sh
pip install 'lmnr[anthropic,openai]'
```

To install all possible instrumentations, use the following command:

```sh
pip install 'lmnr[all]'
```

Initialize Laminar in your code:

```python
from lmnr import Laminar

Laminar.initialize(project_api_key="<PROJECT_API_KEY>")
```

You can also skip passing the `project_api_key`, in which case it will be looked up in the environment (or a local `.env` file) under the key `LMNR_PROJECT_API_KEY`.

Note that you only need to initialize Laminar once in your application. You should do so as early as possible, e.g. at server startup.

## Set-up for self-hosting

If you self-host a Laminar instance, the default connection settings are `http://localhost:8000` for HTTP and `http://localhost:8001` for gRPC. Initialize the SDK accordingly:

```python
from lmnr import Laminar

Laminar.initialize(
    project_api_key="<PROJECT_API_KEY>",
    base_url="http://localhost",
    http_port=8000,
    grpc_port=8001,
)
```

## Instrumentation

### Manual instrumentation

To instrument any function in your code, we provide a simple `@observe()` decorator. This is useful if you want to trace a request handler or a function that combines multiple LLM calls.
```python
import os

from openai import OpenAI

from lmnr import Laminar, observe

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]
    # OpenAI calls are still automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    poem = response.choices[0].message.content
    return poem


@observe()
def generate_poems():
    poem1 = poem_writer(topic="laminar flow")
    poem2 = poem_writer(topic="turbulence")
    poems = f"{poem1}\n\n---\n\n{poem2}"
    return poems
```

You can also use `Laminar.start_as_current_span` if you want to record a chunk of your code using a `with` statement:

```python
def handle_user_request(topic: str):
    with Laminar.start_as_current_span(name="poem_writer", input=topic):
        poem = poem_writer(topic=topic)
        # Use set_span_output to record the output of the span
        Laminar.set_span_output(poem)
```

### Automatic instrumentation

Laminar allows you to automatically instrument the majority of the most popular LLM, vector DB, database, requests, and other libraries.

If you want to automatically instrument the default set of libraries, simply do NOT pass the `instruments` argument to `.initialize()`. See the full list of available instrumentations in the [enum](https://github.com/lmnr-ai/lmnr-python/blob/main/src/lmnr/opentelemetry_lib/instruments.py).

If you want to automatically instrument only specific LLM, vector DB, or other calls with OpenTelemetry-compatible instrumentation, pass the appropriate instruments to `.initialize()`.
For example, if you want to instrument only OpenAI and Anthropic, do the following:

```python
import os

from lmnr import Laminar, Instruments

Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments={Instruments.OPENAI, Instruments.ANTHROPIC},
)
```

If you want to fully disable any kind of autoinstrumentation, pass an empty set as `instruments=set()` to `.initialize()`.

Autoinstrumentations are provided by Traceloop's [OpenLLMetry](https://github.com/traceloop/openllmetry).

## Evaluations

### Quickstart

Install the package:

```sh
pip install lmnr
```

Create a file named `my_first_eval.py` with the following code:

```python
from lmnr import evaluate


def write_poem(data):
    return f"This is a good poem about {data['topic']}"


def contains_poem(output, target):
    return 1 if output in target["poem"] else 0


# Evaluation data
data = [
    {"data": {"topic": "flowers"}, "target": {"poem": "This is a good poem about flowers"}},
    {"data": {"topic": "cars"}, "target": {"poem": "I like cars"}},
]

evaluate(
    data=data,
    executor=write_poem,
    evaluators={
        "contains_poem": contains_poem,
    },
)
```
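To make the executor/evaluator contract concrete, here is a minimal sketch that applies the evaluators by hand instead of calling `evaluate()`. The `exact_match` evaluator and the scoring loop are illustrative, not part of the SDK; `evaluate()` additionally handles reporting results to Laminar.

```python
# Sketch of the contract behind evaluate(): an executor maps a `data`
# dict to an output; each evaluator maps (output, target) to a score.

def write_poem(data):
    return f"This is a good poem about {data['topic']}"

def contains_poem(output, target):
    # 1 if the produced output appears inside the reference poem
    return 1 if output in target["poem"] else 0

def exact_match(output, target):
    # Illustrative second evaluator, not part of the SDK
    return 1 if output == target["poem"] else 0

data = [
    {"data": {"topic": "flowers"}, "target": {"poem": "This is a good poem about flowers"}},
    {"data": {"topic": "cars"}, "target": {"poem": "I like cars"}},
]

evaluators = {"contains_poem": contains_poem, "exact_match": exact_match}

# Manually applying the contract to each datapoint:
scores = [
    {name: fn(write_poem(point["data"]), point["target"])
     for name, fn in evaluators.items()}
    for point in data
]
print(scores)
# → [{'contains_poem': 1, 'exact_match': 1}, {'contains_poem': 0, 'exact_match': 0}]
```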
## Release History
| Version | Changes | Urgency | Date |
|---|---|---|---|
| 0.7.47 | Imported from PyPI (0.7.47) | Low | 4/21/2026 |
