Mirascope LLM analytics installation

  1. Install dependencies

    Required
    Full working examples

    See the complete Python example on GitHub. If you're using the PostHog SDK wrapper instead of OpenTelemetry, see the Python wrapper example.

    Install Mirascope with OpenAI support, the OpenTelemetry SDK and OTLP exporter, and the OpenAI instrumentation.

    pip install "mirascope[openai]" opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-openai-v2
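
    To confirm the dependencies installed correctly, you can run a quick import check. This is an optional sanity script, not part of the official setup:

    # Optional sanity check: these imports should all succeed after installation
    import mirascope
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

    print("All LLM analytics dependencies are importable")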
  2. Set up OpenTelemetry tracing

    Required

    Configure OpenTelemetry to auto-instrument OpenAI SDK calls and export traces to PostHog. PostHog converts gen_ai.* spans into $ai_generation events automatically.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor
    from opentelemetry.sdk.resources import Resource, SERVICE_NAME
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

    # Resource attributes are attached to every span and forwarded to PostHog
    resource = Resource(attributes={
        SERVICE_NAME: "my-app",
        "posthog.distinct_id": "user_123",  # optional: identifies the user in PostHog
        "foo": "bar",  # custom properties are passed through
    })

    # Send spans to PostHog's OTLP endpoint, authenticated with your project token
    exporter = OTLPSpanExporter(
        endpoint="https://us.i.posthog.com/i/v0/ai/otel",
        headers={"Authorization": "Bearer <ph_project_token>"},
    )

    provider = TracerProvider(resource=resource)
    provider.add_span_processor(SimpleSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    # Auto-instrument the OpenAI SDK so every call emits gen_ai.* spans
    OpenAIInstrumentor().instrument()
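
    SimpleSpanProcessor exports each span synchronously, which is convenient for local testing but adds latency to every request. In production you may prefer OpenTelemetry's BatchSpanProcessor, which buffers spans and exports them in the background. A minimal sketch, swapping out the processor registered above:

    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    # Buffer spans and export them in background batches instead of blocking per span
    provider.add_span_processor(BatchSpanProcessor(exporter))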
  3. Call your LLMs

    Required

    Use Mirascope as normal. PostHog automatically captures an $ai_generation event for each LLM call made through the OpenAI SDK that Mirascope uses internally.

    from mirascope.core import openai, prompt_template

    # Each call to this function runs through the OpenAI SDK, so PostHog
    # captures an $ai_generation event automatically
    @openai.call("gpt-4o-mini")
    @prompt_template("Tell me a fun fact about {topic}")
    def fun_fact(topic: str): ...

    response = fun_fact("hedgehogs")
    print(response.content)

    Note: If you want to capture LLM events anonymously, omit the posthog.distinct_id resource attribute. See our docs on anonymous vs identified events to learn more.

    You can expect captured $ai_generation events to have the following properties:

    Property                    Description
    $ai_model                   The specific model, like gpt-5-mini or claude-4-sonnet
    $ai_latency                 The latency of the LLM call in seconds
    $ai_time_to_first_token     Time to first token in seconds (streaming only)
    $ai_tools                   Tools and functions available to the LLM
    $ai_input                   List of messages sent to the LLM
    $ai_input_tokens            The number of tokens in the input (often found in response.usage)
    $ai_output_choices          List of response choices from the LLM
    $ai_output_tokens           The number of tokens in the output (often found in response.usage)
    $ai_total_cost_usd          The total cost in USD (input + output)
    [...]                       See the full list of properties
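
    Streaming calls also populate $ai_time_to_first_token. A minimal sketch, assuming Mirascope's stream=True call option (check the Mirascope docs for the exact streaming API):

    from mirascope.core import openai, prompt_template

    @openai.call("gpt-4o-mini", stream=True)
    @prompt_template("Tell me a fun fact about {topic}")
    def fun_fact_stream(topic: str): ...

    # The stream yields (chunk, tool) pairs; print content as it arrives
    for chunk, _ in fun_fact_stream("hedgehogs"):
        print(chunk.content, end="", flush=True)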
  4. Verify traces and generations

    Recommended
    Confirm LLM events are being sent to PostHog

    Let's make sure LLM events are being captured and sent to PostHog. Under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.


    [Screenshot: LLM generations in PostHog]
  5. Next steps

    Recommended

    Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.

    Resource                    Description
    Basics                      Learn the basics of how LLM calls become events in PostHog.
    Generations                 Read about the $ai_generation event and its properties.
    Traces                      Explore the trace hierarchy and how to use it to debug LLM calls.
    Spans                       Review spans and their role in representing individual operations.
    Analyze LLM performance     Learn how to create dashboards to analyze LLM performance.
