# Observability

Observability gives you full visibility into your AI assistant’s behavior. By connecting your assistant to Langfuse, you can trace every LLM call, tool execution, and conversation turn — including input messages, output responses, token usage, latency, and cost.

In this tutorial, you will learn how to:

- Connect your AI assistant to Langfuse for LLM observability
- Store your Langfuse credentials securely as integration secrets
- View traces, generations, and tool calls in the Langfuse dashboard
- Understand how conversations are grouped in traces
## Overview

When observability is enabled on an assistant, every interaction is automatically traced and sent to your Langfuse project. This includes:

| What is traced | Where it happens | Details captured |
|---|---|---|
| LLM generations | AI Conversations | Input messages, output response, model, token usage |
| Tool calls | AI Assistants | Tool name, input arguments, output result |
Traces are grouped by a deterministic ID derived from the `conversation_id`. This means all LLM calls and tool executions within the same conversation appear together in your Langfuse dashboard.

### Key benefits
- Debugging: Inspect the exact messages sent to the LLM and the responses received.
- Cost tracking: Monitor token usage per conversation, assistant, or model.
- Quality evaluation: Review LLM outputs and tool call results to identify issues.
- Latency analysis: Measure response times for LLM calls and tool executions.
- Multi-tenancy: Each assistant can connect to a different Langfuse project with its own credentials.
## Requirements

Before you begin, you will need:

- A Langfuse account (cloud or self-hosted)
- A Langfuse project with a public key and secret key
- A Telnyx AI Assistant
## Configuration

### Step 1: Create your Langfuse credentials

Log in to your Langfuse dashboard and navigate to Settings > API Keys. Create a new API key pair. You will need:

| Credential | Description | Example |
|---|---|---|
| Public Key | Identifies your Langfuse project | `pk-lf-abc123...` |
| Secret Key | Authenticates requests to Langfuse | `sk-lf-xyz789...` |
| Host | Your Langfuse instance URL | `https://cloud.langfuse.com` |
### Step 2: Store credentials as integration secrets

Your Langfuse keys must be stored securely as Telnyx integration secrets. Navigate to the Integration Secrets tab in the portal. Create two secrets:

- Langfuse Secret Key — store your Langfuse secret key as the secret value. Choose a memorable identifier (e.g., `langfuse-secret-key`).
- Langfuse Public Key — store your Langfuse public key as the secret value. Choose a memorable identifier (e.g., `langfuse-public-key`).

You will not be able to access the value of a secret after it is stored.
### Step 3: Enable observability on your assistant

You can enable observability via the API when creating or updating an assistant.

### Disabling observability

To stop tracing, update the status to `disabled`.
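As a sketch, the request bodies for enabling and disabling observability might look like the following. The field names come from the settings reference in this document; the credential identifiers and host are placeholders, and the surrounding endpoint call is omitted.

```python
import json

# Hypothetical request-body fragments for the assistant create/update API.
# Field names match the observability settings reference in this document.
enable_payload = {
    "observability_settings": {
        "status": "enabled",
        "secret_key_ref": "langfuse-secret-key",  # your integration secret identifier
        "public_key_ref": "langfuse-public-key",  # your integration secret identifier
        "host": "https://cloud.langfuse.com",
    }
}

# Disabling only requires flipping the status.
disable_payload = {
    "observability_settings": {
        "status": "disabled",
    }
}

print(json.dumps(enable_payload, indent=2))
```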
## Linking a Langfuse-managed prompt

In addition to tracing, you can link your assistant to a prompt managed in Langfuse. This lets you iterate on the assistant’s instructions in Langfuse and reference them by version or label, and optionally have Telnyx publish the assistant’s instructions back to Langfuse on every save.

### Pin to a prompt version or label

Set `prompt_name` together with either `prompt_version` (an integer pinning to an exact version) or `prompt_label` (e.g. `"production"`, pinning to whichever version currently carries that label). The two are mutually exclusive — Langfuse pins by one or the other, never both.
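The two pinning styles can be sketched as follows; the prompt name is hypothetical, and the check mirrors the mutual-exclusion rule described above.

```python
# Two valid pinning configurations (the prompt name is a placeholder).
pin_by_version = {
    "prompt_name": "support-agent",  # hypothetical Langfuse prompt name
    "prompt_version": 7,             # pin to exactly version 7
}

pin_by_label = {
    "prompt_name": "support-agent",
    "prompt_label": "production",    # pin to whichever version carries this label
}

def is_valid_pin(settings: dict) -> bool:
    """prompt_version and prompt_label must never be set together."""
    return not ("prompt_version" in settings and "prompt_label" in settings)

print(is_valid_pin(pin_by_version), is_valid_pin(pin_by_label))  # True True
```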
### Auto-publish the assistant’s instructions

Set `prompt_sync` to `enabled` to automatically publish the assistant’s instructions back to Langfuse as a prompt on every create or update. Telnyx calls Langfuse’s create-prompt API and stores the returned version in `prompt_version`, so the assistant continues to run on the exact instructions you just saved.

`prompt_sync` requires `prompt_name`. It is independent of `prompt_version` / `prompt_label` pinning — sync only controls whether Telnyx publishes the instructions; pinning controls which Langfuse version the assistant runs on.
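A settings fragment with auto-publish turned on might look like this (credential identifiers, host, and prompt name are placeholders):

```python
# Sketch of observability settings with prompt auto-publish enabled.
sync_settings = {
    "status": "enabled",
    "secret_key_ref": "langfuse-secret-key",
    "public_key_ref": "langfuse-public-key",
    "host": "https://cloud.langfuse.com",
    "prompt_name": "support-agent",  # required whenever prompt_sync is set
    "prompt_sync": "enabled",        # publish instructions to Langfuse on every save
}

# prompt_sync requires prompt_name:
assert "prompt_name" in sync_settings
```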
## Observability settings reference

| Field | Type | Required when enabled | Description |
|---|---|---|---|
| `status` | string | Yes | `enabled` or `disabled` |
| `secret_key_ref` | string | Yes | Integration secret identifier for your Langfuse secret key |
| `public_key_ref` | string | Yes | Integration secret identifier for your Langfuse public key |
| `host` | string | Yes | Your Langfuse instance URL |
| `prompt_name` | string | No | Name of a Langfuse-managed prompt to link. Required when `prompt_version`, `prompt_label`, or `prompt_sync` is set |
| `prompt_version` | integer | No | Pin the assistant to an exact prompt version (≥ 1). Mutually exclusive with `prompt_label` |
| `prompt_label` | string | No | Pin the assistant to a labeled prompt (e.g. `"production"`). Mutually exclusive with `prompt_version` |
| `prompt_sync` | string | No | `enabled` or `disabled` (default `disabled`). When enabled, publishes the assistant’s instructions to Langfuse on every save and stores the returned version in `prompt_version`. Requires `prompt_name` |
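The rules in this reference can be expressed as a client-side sanity check. This is a sketch for pre-flight validation only; the API performs its own authoritative validation.

```python
def validate_observability(settings: dict) -> list[str]:
    """Mirror the validation rules from the settings reference. Returns a
    list of error messages; an empty list means the settings look valid."""
    errors = []
    status = settings.get("status")
    if status not in ("enabled", "disabled"):
        errors.append("status must be 'enabled' or 'disabled'")
    # All three credential fields are required when status is enabled.
    if status == "enabled":
        for field in ("secret_key_ref", "public_key_ref", "host"):
            if not settings.get(field):
                errors.append(f"{field} is required when status is enabled")
    # prompt_name is required whenever any prompt option is set.
    if any(k in settings for k in ("prompt_version", "prompt_label", "prompt_sync")):
        if "prompt_name" not in settings:
            errors.append("prompt_name is required for prompt options")
    # prompt_version and prompt_label are mutually exclusive.
    if "prompt_version" in settings and "prompt_label" in settings:
        errors.append("prompt_version and prompt_label are mutually exclusive")
    # prompt_version, if present, must be an integer >= 1.
    version = settings.get("prompt_version")
    if version is not None and (not isinstance(version, int) or version < 1):
        errors.append("prompt_version must be an integer >= 1")
    return errors
```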
When `status` is `enabled`, all three credential fields are required. The API will return an error if any are missing. The secret references are validated to ensure they exist in your integration secrets.

## What you will see in Langfuse
Once observability is enabled and your assistant handles a conversation, traces will appear in your Langfuse dashboard.

### Traces

Each conversation turn generates a trace. Traces from the same conversation share a deterministic ID derived from the `conversation_id`, so they are grouped together in the Langfuse UI.

Each trace includes:

- Name: The conversation name (if set), otherwise `chat`
- Metadata: `conversation_id` and `assistant_id`
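The grouping above relies on the trace ID being a deterministic function of the `conversation_id`. As an illustration only (Telnyx’s actual derivation is internal and not documented here), one common way to derive such an ID:

```python
import uuid

def trace_id_for(conversation_id: str) -> str:
    """Derive a stable trace ID from a conversation_id. UUIDv5 hashes its
    input, so the same conversation_id always yields the same ID."""
    return str(uuid.uuid5(uuid.NAMESPACE_URL, conversation_id))
```

Because the function is deterministic, every turn of the same conversation maps to the same trace group, which is why all of a conversation’s LLM calls and tool executions appear together.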
### Generations

Each LLM call appears as a generation observation within the trace. Generations include:

- Model: The LLM model used (e.g., `anthropic/claude-haiku-4-5`)
- Input: The full message array sent to the model, including system prompt and conversation history
- Output: The model’s response content
- Token usage: Prompt tokens, completion tokens, and total tokens (non-streaming only)
### Tool calls

When your assistant uses webhook tools, each tool execution appears as an event within the trace. Events include:

- Name: `tool-call-{tool_name}`
- Input: The tool call arguments
- Output: The tool response
## Best practices

### Security
- Never share your Langfuse keys directly — always store them as Telnyx integration secrets.
- Use separate Langfuse projects for development and production assistants.
- Rotate keys periodically — update the integration secrets and the assistant configuration when you rotate Langfuse API keys.
### Performance
- Observability adds minimal overhead. Traces are sent asynchronously and do not block conversation flow.
- If you are self-hosting Langfuse, ensure your instance is reachable from Telnyx infrastructure.
### Organization
- Use conversation names to make traces easier to find in the Langfuse dashboard. Conversation names are set automatically and appear as the trace name.
- Filter by metadata in Langfuse to find traces for a specific `conversation_id` or `assistant_id`.
## Troubleshooting

### Traces not appearing in Langfuse
- Verify status is enabled: Check that `observability_settings.status` is `"enabled"` on your assistant.
- Verify credentials: Ensure your `secret_key_ref` and `public_key_ref` point to valid integration secrets with correct Langfuse keys.
- Check the host URL: Confirm the `host` field matches your Langfuse instance (e.g., `https://cloud.langfuse.com` for Langfuse Cloud).
- Check Langfuse project: Verify you are looking at the correct project in the Langfuse dashboard.
### Missing output or token usage
- Token usage is captured for non-streaming LLM calls. Streaming calls may not include token counts depending on the model provider.
- Output is captured after the LLM response completes. If a call fails mid-stream, the output may be empty.
### Secret reference errors

If you receive an error like `secret_key_ref not found`, ensure:

- The integration secret exists in your Integration Secrets.
- The identifier in `secret_key_ref` or `public_key_ref` exactly matches the secret name you created.
- The secret belongs to the same organization as the assistant.
### Observability not working after key rotation

If you rotated your Langfuse API keys:

- Update the integration secret values in the portal.
- The assistant will automatically use the new values on the next conversation — no assistant update is required.