CKEditor AI On-Premises – Observability

CKEditor AI On-Premises supports OpenTelemetry instrumentation, giving you full visibility into the service’s runtime behavior. This includes AI interactions with LLM providers, token usage, and response quality, as well as HTTP requests and database operations.

Telemetry data can be exported to any OTLP-compatible backend (such as Jaeger, Grafana Tempo, or Datadog) and, optionally, to Langfuse for AI-specific observability. Both exports can run simultaneously.

Note

If you would like to use an observability platform other than Langfuse for AI-specific insights, please contact us and we can enable support for your preferred provider.

Configuration

OpenTelemetry is enabled by setting LLM_TELEMETRY_ENABLED to true and providing the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable. When both are configured, the service starts collecting and exporting trace data.

OTLP exporter

The following environment variables control the OTLP trace export:

LLM_TELEMETRY_ENABLED=[true|false]
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=[OTLP_TRACES_ENDPOINT_URL]
OTEL_TRACES_SAMPLER_ARG=[SAMPLING_RATE]
OTEL_DEBUG=[true|false]

Where:

  • LLM_TELEMETRY_ENABLED (required, default: false) – set to true to enable LLM telemetry collection. Without this, no telemetry data is collected or exported.
  • OTEL_EXPORTER_OTLP_TRACES_ENDPOINT (required) – the URL of the OTLP-compatible endpoint to send trace data to (for example, http://jaeger:4318/v1/traces).
  • OTEL_TRACES_SAMPLER_ARG (optional, default: 1.0) – the sampling rate as a float between 0.0 and 1.0. A value of 1.0 means all traces are captured, 0.5 means roughly half, and so on. Lowering the sampling rate reduces the volume of exported data in high-traffic environments.
  • OTEL_DEBUG (optional) – when set to any truthy value, enables verbose OpenTelemetry diagnostic logging to stdout. Useful for troubleshooting OTLP export issues.
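For local testing, one quick way to verify the OTLP export is to run a Jaeger all-in-one container, which accepts OTLP traces over HTTP on port 4318 and serves its UI on port 16686. This is a sketch only; the container name and network layout below are illustrative assumptions, and in practice the service container must be able to resolve the collector's hostname:

```shell
# Start a local Jaeger all-in-one instance that accepts OTLP traces
# over HTTP (port 4318) and serves its UI on http://localhost:16686.
docker run -d --name jaeger \
    -p 16686:16686 \
    -p 4318:4318 \
    jaegertracing/all-in-one:latest

# Telemetry-related variables to pass to CKEditor AI On-Premises
# (the rest of the service configuration is omitted here):
#   LLM_TELEMETRY_ENABLED=true
#   OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4318/v1/traces
#   OTEL_TRACES_SAMPLER_ARG=1.0
```

With this running, traces should appear in the Jaeger UI shortly after the service handles its first AI request.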

Langfuse

Langfuse is an open-source LLM observability platform that provides detailed dashboards for monitoring AI interactions, token usage, costs, and response quality. CKEditor AI On-Premises integrates with Langfuse natively through a dedicated span processor.

Langfuse integration is enabled when both LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY are set:

LANGFUSE_PUBLIC_KEY=[YOUR_LANGFUSE_PUBLIC_KEY]
LANGFUSE_SECRET_KEY=[YOUR_LANGFUSE_SECRET_KEY]
LANGFUSE_BASE_URL=[LANGFUSE_URL]
LANGFUSE_DEBUG=[true|false]

Where:

  • LANGFUSE_PUBLIC_KEY (required) – the public API key from your Langfuse project.
  • LANGFUSE_SECRET_KEY (required) – the secret API key from your Langfuse project.
  • LANGFUSE_BASE_URL (optional, default: https://cloud.langfuse.com) – the base URL of your Langfuse instance. Override this when using a self-hosted Langfuse deployment.
  • LANGFUSE_DEBUG (optional) – when set to any truthy value, enables verbose diagnostic logging for Langfuse trace export. Unlike OTEL_DEBUG, which covers the general OpenTelemetry pipeline, this option focuses specifically on the Langfuse span processor.

Note

Langfuse integration additionally requires LLM_TELEMETRY_ENABLED to be set to true and OTEL_EXPORTER_OTLP_TRACES_ENDPOINT to be set, since these are the main switches that enable OpenTelemetry instrumentation in the service.
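Putting the pieces together, a self-hosted Langfuse deployment could be wired up with a complete set of variables like the following. The hostnames and key values are illustrative placeholders, not real endpoints or credentials:

```shell
# Minimal telemetry configuration targeting a self-hosted Langfuse
# instance. All hostnames and keys below are placeholders.
LLM_TELEMETRY_ENABLED=true
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://otel-collector:4318/v1/traces
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://langfuse.internal.example.com
```

Note that LANGFUSE_BASE_URL must point at your own deployment here; leaving it unset would send traces to Langfuse Cloud instead.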

Using OTLP and Langfuse together

The OTLP exporter and Langfuse span processor run as independent export pipelines. This means you can:

  • Use only OTLP – to send all traces to a general-purpose backend like Jaeger or Grafana Tempo.
  • Use OTLP with Langfuse – to get both infrastructure-level tracing and AI-specific insights in Langfuse simultaneously.

When both are configured, traces are sent to both destinations in parallel.

Docker example

Below is an example of running CKEditor AI On-Premises with OpenTelemetry enabled and connected to Langfuse Cloud:

docker run --init -p 8000:8000 \
    -e LICENSE_KEY=[your license key] \
    -e ENVIRONMENTS_MANAGEMENT_SECRET_KEY=[your management secret key] \
    -e DATABASE_DRIVER=[mysql|postgres] \
    -e DATABASE_HOST=[your database host] \
    -e DATABASE_USER=[your database user] \
    -e DATABASE_PASSWORD=[your database password] \
    -e DATABASE_DATABASE=[your database name] \
    -e REDIS_HOST=[your redis host] \
    -e PROVIDERS='{"openai":{"type":"openai","apiKeys":["your-api-key"]}}' \
    -e STORAGE_DRIVER=[s3|azure|filesystem|database] \
    -e LLM_TELEMETRY_ENABLED="true" \
    -e OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://your-otlp-collector:4318/v1/traces" \
    -e LANGFUSE_PUBLIC_KEY="pk-lf-..." \
    -e LANGFUSE_SECRET_KEY="sk-lf-..." \
    docker.cke-cs.com/ai-service:[version]

Environment variables reference

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| LLM_TELEMETRY_ENABLED | Yes | false | Enables LLM telemetry collection. Must be set to true. |
| OTEL_EXPORTER_OTLP_TRACES_ENDPOINT | Yes | – | OTLP endpoint URL for trace export. |
| OTEL_TRACES_SAMPLER_ARG | No | 1.0 | Trace sampling rate (0.0–1.0). |
| OTEL_DEBUG | No | – | Enables verbose OpenTelemetry diagnostic logging. |
| LANGFUSE_PUBLIC_KEY | No | – | Langfuse public API key. Required for Langfuse integration. |
| LANGFUSE_SECRET_KEY | No | – | Langfuse secret API key. Required for Langfuse integration. |
| LANGFUSE_BASE_URL | No | https://cloud.langfuse.com | Base URL of the Langfuse instance. |
| LANGFUSE_DEBUG | No | – | Enables verbose diagnostic logging for Langfuse trace export. |