Observability

Observability gives you full visibility into your AI assistant’s behavior. By connecting your assistant to Langfuse, you can trace every LLM call, tool execution, and conversation turn — including input messages, output responses, token usage, latency, and cost. In this tutorial, you will learn how to:
  • Connect your AI assistant to Langfuse for LLM observability
  • Store your Langfuse credentials securely as integration secrets
  • View traces, generations, and tool calls in the Langfuse dashboard
  • Understand how conversations are grouped in traces

Overview

When observability is enabled on an assistant, every interaction is automatically traced and sent to your Langfuse project. This includes:
What is traced     Where it happens     Details captured
LLM generations    AI Conversations     Input messages, output response, model, token usage
Tool calls         AI Assistants        Tool name, input arguments, output result
Traces are grouped by conversation using a deterministic trace ID derived from the conversation_id. This means all LLM calls and tool executions within the same conversation appear together in your Langfuse dashboard.

Key benefits

  • Debugging: Inspect the exact messages sent to the LLM and the responses received.
  • Cost tracking: Monitor token usage per conversation, assistant, or model.
  • Quality evaluation: Review LLM outputs and tool call results to identify issues.
  • Latency analysis: Measure response times for LLM calls and tool executions.
  • Multi-tenancy: Each assistant can connect to a different Langfuse project with its own credentials.

Requirements

Before you begin, you will need:
  1. A Langfuse account (cloud or self-hosted)
  2. A Langfuse project with a public key and secret key
  3. A Telnyx AI Assistant

Configuration

Step 1: Create your Langfuse credentials

Log in to your Langfuse dashboard and navigate to Settings > API Keys. Create a new API key pair. You will need:
Credential   Description                          Example
Public Key   Identifies your Langfuse project     pk-lf-abc123...
Secret Key   Authenticates requests to Langfuse   sk-lf-xyz789...
Host         Your Langfuse instance URL           https://cloud.langfuse.com
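If you want to confirm a key pair works before storing it, you can call the Langfuse API directly. Langfuse's public API accepts HTTP Basic auth with the public key as the username and the secret key as the password; this is a minimal sketch, assuming the GET /api/public/projects endpoint, which returns the project the key pair belongs to:
# Sanity-check a Langfuse key pair (assumes GET /api/public/projects)
curl --request GET \
  --url https://cloud.langfuse.com/api/public/projects \
  --user "pk-lf-abc123...:sk-lf-xyz789..."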

Step 2: Store credentials as integration secrets

Your Langfuse keys must be stored securely as Telnyx integration secrets. Navigate to the Integration Secrets tab in the portal. Create two secrets:
  1. Langfuse Secret Key — store your Langfuse secret key as the secret value. Choose a memorable identifier (e.g., langfuse-secret-key).
  2. Langfuse Public Key — store your Langfuse public key as the secret value. Choose a memorable identifier (e.g., langfuse-public-key).
You will not be able to access the value of a secret after it is stored.
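If you prefer to script this step, integration secrets can also be created via the API. This is a sketch assuming the /v2/integration_secrets endpoint with bearer-type secrets; run it once per key, swapping in the langfuse-public-key identifier and your public key for the second secret:
# Store the Langfuse secret key as a bearer-type integration secret
curl --request POST \
  --url https://api.telnyx.com/v2/integration_secrets \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "identifier": "langfuse-secret-key",
    "type": "bearer",
    "token": "sk-lf-xyz789..."
  }'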

Step 3: Enable observability on your assistant

You can enable observability via the API when creating or updating an assistant:
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "My Observable Assistant",
    "model": "anthropic/claude-haiku-4-5",
    "instructions": "You are a helpful assistant.",
    "observability_settings": {
      "status": "enabled",
      "secret_key_ref": "langfuse-secret-key",
      "public_key_ref": "langfuse-public-key",
      "host": "https://cloud.langfuse.com"
    }
  }'
To update an existing assistant:
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "observability_settings": {
      "status": "enabled",
      "secret_key_ref": "langfuse-secret-key",
      "public_key_ref": "langfuse-public-key",
      "host": "https://cloud.langfuse.com"
    }
  }'

Disabling observability

To stop tracing, update the status to disabled:
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "observability_settings": {
      "status": "disabled"
    }
  }'

Linking a Langfuse-managed prompt

In addition to tracing, you can link your assistant to a prompt managed in Langfuse. This lets you iterate on the assistant’s instructions in Langfuse and reference them by version or label, and optionally have Telnyx publish the assistant’s instructions back to Langfuse on every save.

Pin to a prompt version or label

Set prompt_name together with either prompt_version (an integer pinning to an exact version) or prompt_label (e.g. "production", pinning to whichever version currently carries that label). The two are mutually exclusive — Langfuse pins by one or the other, never both.
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "observability_settings": {
      "status": "enabled",
      "secret_key_ref": "langfuse-secret-key",
      "public_key_ref": "langfuse-public-key",
      "host": "https://cloud.langfuse.com",
      "prompt_name": "support-agent",
      "prompt_label": "production"
    }
  }'

Auto-publish the assistant’s instructions

Set prompt_sync to enabled to automatically publish the assistant’s instructions back to Langfuse as a prompt on every create or update. Telnyx calls Langfuse’s create-prompt API and stores the returned version in prompt_version, so the assistant continues to run on the exact instructions you just saved.
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "observability_settings": {
      "status": "enabled",
      "secret_key_ref": "langfuse-secret-key",
      "public_key_ref": "langfuse-public-key",
      "host": "https://cloud.langfuse.com",
      "prompt_name": "support-agent",
      "prompt_sync": "enabled"
    }
  }'
prompt_sync requires prompt_name. It is independent of prompt_version / prompt_label pinning — sync only controls whether Telnyx publishes the instructions; pinning controls which Langfuse version the assistant runs on.
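To check which version a pin or label currently resolves to, you can query Langfuse directly. A minimal sketch, assuming Langfuse's GET /api/public/v2/prompts/{name} endpoint and Basic auth with your key pair:
# Resolve the prompt version behind a label (assumes the v2 prompts endpoint)
curl --request GET \
  --url 'https://cloud.langfuse.com/api/public/v2/prompts/support-agent?label=production' \
  --user "pk-lf-abc123...:sk-lf-xyz789..."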

Observability settings reference

Field            Type      Required when enabled   Description
status           string    Yes                     enabled or disabled
secret_key_ref   string    Yes                     Integration secret identifier for your Langfuse secret key
public_key_ref   string    Yes                     Integration secret identifier for your Langfuse public key
host             string    Yes                     Your Langfuse instance URL
prompt_name      string    No                      Name of a Langfuse-managed prompt to link. Required when prompt_version, prompt_label, or prompt_sync is set
prompt_version   integer   No                      Pin the assistant to an exact prompt version (≥ 1). Mutually exclusive with prompt_label
prompt_label     string    No                      Pin the assistant to a labeled prompt (e.g. "production"). Mutually exclusive with prompt_version
prompt_sync      string    No                      enabled or disabled (default disabled). When enabled, publishes the assistant's instructions to Langfuse on every save and stores the returned version in prompt_version. Requires prompt_name
When status is enabled, secret_key_ref, public_key_ref, and host are all required; the API returns an error if any is missing. The secret references are validated to confirm they exist in your integration secrets.

What you will see in Langfuse

Once observability is enabled and your assistant handles a conversation, traces will appear in your Langfuse dashboard.

Traces

Each conversation turn generates a trace. Traces from the same conversation share a deterministic ID derived from the conversation_id, so they are grouped together in the Langfuse UI. Each trace includes:
  • Name: The conversation name (if set), otherwise chat
  • Metadata: conversation_id and assistant_id
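Traces can also be fetched programmatically, which is handy for spot-checking that data is arriving. A minimal sketch, assuming Langfuse's GET /api/public/traces list endpoint with Basic auth; each returned trace carries the conversation_id and assistant_id metadata described above:
# List recent traces (assumes GET /api/public/traces with limit paging)
curl --request GET \
  --url 'https://cloud.langfuse.com/api/public/traces?limit=10' \
  --user "pk-lf-abc123...:sk-lf-xyz789..."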

Generations

Each LLM call appears as a generation observation within the trace. Generations include:
  • Model: The LLM model used (e.g., anthropic/claude-haiku-4-5)
  • Input: The full message array sent to the model, including system prompt and conversation history
  • Output: The model’s response content
  • Token usage: Prompt tokens, completion tokens, and total tokens (non-streaming only)

Tool calls

When your assistant uses webhook tools, each tool execution appears as an event within the trace. Events include:
  • Name: tool-call-{tool_name}
  • Input: The tool call arguments
  • Output: The tool response

Best practices

Security

  • Never embed your Langfuse keys directly in API requests or configuration; always store them as Telnyx integration secrets.
  • Use separate Langfuse projects for development and production assistants.
  • Rotate keys periodically — update the integration secrets and the assistant configuration when you rotate Langfuse API keys.

Performance

  • Observability adds minimal overhead. Traces are sent asynchronously and do not block conversation flow.
  • If you are self-hosting Langfuse, ensure your instance is reachable from Telnyx infrastructure.

Organization

  • Set conversation names to make traces easier to find in the Langfuse dashboard; when a conversation has a name, it is automatically used as the trace name.
  • Filter by metadata in Langfuse to find traces for a specific conversation_id or assistant_id.

Troubleshooting

Traces not appearing in Langfuse

  • Verify status is enabled: Check that observability_settings.status is "enabled" on your assistant (you can retrieve it via the API, as shown below).
  • Verify credentials: Ensure your secret_key_ref and public_key_ref point to valid integration secrets with correct Langfuse keys.
  • Check the host URL: Confirm the host field matches your Langfuse instance (e.g., https://cloud.langfuse.com for Langfuse Cloud).
  • Check Langfuse project: Verify you are looking at the correct project in the Langfuse dashboard.
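To confirm the first two checks, retrieve the assistant and inspect its observability_settings. A sketch assuming the read endpoint mirrors the create and update endpoints used above:
# Fetch the assistant and inspect observability_settings in the response
curl --request GET \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY"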

Missing output or token usage

  • Token usage is captured for non-streaming LLM calls. Streaming calls may not include token counts depending on the model provider.
  • Output is captured after the LLM response completes. If a call fails mid-stream, the output may be empty.

Secret reference errors

If you receive an error like secret_key_ref not found, ensure:
  1. The integration secret exists in your Integration Secrets (see the listing example after this checklist).
  2. The identifier in secret_key_ref or public_key_ref exactly matches the secret name you created.
  3. The secret belongs to the same organization as the assistant.
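To see which identifiers actually exist, you can list your integration secrets. A sketch assuming the same /v2/integration_secrets endpoint used to create them also supports GET:
# List integration secrets and compare identifiers against your refs
curl --request GET \
  --url https://api.telnyx.com/v2/integration_secrets \
  --header "Authorization: Bearer $TELNYX_API_KEY"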

Observability not working after key rotation

If you rotated your Langfuse API keys:
  1. Update the integration secret values in the portal.
  2. The assistant will automatically use the new values on the next conversation — no assistant update is required.