
CKEditor AI On-Premises logs

Note

CKEditor AI On-Premises is in early access. Some functionality may change or not work as expected.
Also, selected capabilities available on SaaS are not yet available for CKEditor AI On-Premises.

The logs from CKEditor AI On-Premises are written to stdout and stderr, and most of them are formatted as JSON. They can be used for monitoring or debugging. In production environments, we recommend storing the logs in files or using a distributed logging system (such as ELK or CloudWatch).

Monitoring CKEditor AI with logs


To get more insight into how CKEditor AI On-Premises is performing, the service provides logs that can be used for monitoring. To enable them, add the ENABLE_METRIC_LOGS=true environment variable.

Log structure


The log structure contains the following information:

  • handler – A unified identifier of the action. Use this field to identify calls.
  • traceId – A unique RPC call ID.
  • tags – A semicolon-separated list of tags. Use this field to filter metrics logs.
  • data – An object containing additional information. It might vary between different transports.
  • data.duration – The request duration in milliseconds.
  • data.transport – The type of the request transport. It can be http or ws (WebSocket).
  • data.status – The request status. It can be equal to success, failed, or warning.
  • data.statusCode – The response status in the HTTP status code standard.

Additionally, for the HTTP transport, the following information is included:

  • data.url – The URL path.
  • data.method – The request method.

In case of an error, data.status is equal to failed and data.message contains the error message.

An example log for HTTP transport:

{
    "level": 30,
    "time": "2021-03-09T11:15:09.154Z",
    "msg": "Request summary",
    "handler": "ai-service",
    "traceId": "85f13d92-57df-4b3b-98bb-0ca41a5ae601",
    "data": {
        "duration": 2470,
        "transport": "http",
        "statusCode": 200,
        "status": "success",
        "url": "/assets",
        "method": "POST"
    },
    "tags": "metrics"
}
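Because each metric log is a single JSON object per line, the stream is easy to consume programmatically. Below is a minimal sketch (the summarize_metric_logs helper is hypothetical, not part of CKEditor AI On-Premises) that filters metric logs by the tags field and aggregates request durations:

```python
import json

# Hypothetical helper (not part of CKEditor AI On-Premises): summarize
# metric logs, assuming one JSON object per stdout line.
def summarize_metric_logs(lines):
    total = failed = 0
    duration_sum = 0
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip lines that are not JSON
        # "tags" is a semicolon-separated list; keep metric logs only.
        if "metrics" not in record.get("tags", "").split(";"):
            continue
        data = record.get("data", {})
        total += 1
        duration_sum += data.get("duration", 0)
        if data.get("status") == "failed":
            failed += 1
    avg = duration_sum / total if total else 0
    return {"requests": total, "failed": failed, "avg_duration_ms": avg}

sample = [
    '{"tags": "metrics", "data": {"duration": 2470, "status": "success"}}',
    '{"tags": "metrics", "data": {"duration": 530, "status": "failed"}}',
    'plain text line',
]
print(summarize_metric_logs(sample))
# → {'requests': 2, 'failed': 1, 'avg_duration_ms': 1500.0}
```

In a real deployment, the same aggregation would typically be done by your logging system (for example, a Kibana visualization) rather than a custom script.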
Note

See example charts to check how to use logs for monitoring purposes.

Docker


Docker has a built-in logging mechanism that captures logs from the containers’ output. The default logging driver writes these logs to files.

When using this driver, you can use the docker logs command to show the logs from a container. Add the -f flag to follow the logs in real time. Refer to the official Docker documentation for more information about the logs command.

Note

When a container runs for a long period of time, its logs can take up a lot of space. To avoid this, make sure that log rotation is enabled. It can be configured with the max-size option.
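For example, with the default json-file logging driver, rotation can be enabled globally in /etc/docker/daemon.json (the values below are illustrative; pick limits that match your retention needs):

```json
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "5"
    }
}
```

The same options can also be set per container with --log-opt, for example --log-opt max-size=10m.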

Distributed logging


If you are running more than one instance of CKEditor AI On-Premises, we recommend using a distributed logging system. It allows you to view and analyze logs from all instances in one place.

AWS CloudWatch and other cloud solutions


If you are running CKEditor AI On-Premises in the cloud, the simplest and recommended way is to use a logging service available from your selected provider, such as AWS CloudWatch.

To use CloudWatch with AWS ECS, you have to create a log group first and change the log driver to awslogs. When the log driver is configured properly, logs will be streamed directly to CloudWatch.

The logConfiguration may look similar to this:

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-region": "us-west-2",
        "awslogs-group": "cksource",
        "awslogs-stream-prefix": "ck-ai-service-logs"
    }
}

Refer to the Using the awslogs Log Driver article for more information.

On-Premises solutions


If you are using your own infrastructure or, for some reason, cannot use the service offered by your provider, you can always use an on-premises distributed logging system.

There are many solutions available, including:

  • ELK + Filebeat
    This is a stack built on top of Elasticsearch, Logstash, and Kibana. In this configuration, Elasticsearch stores logs, Filebeat reads the logs from Docker and sends them to Elasticsearch, and Kibana is used to view them. Logstash is not necessary because the logs are already structured.

  • Fluentd
    It uses a dedicated Docker log driver to send the logs. It has a built-in frontend, but can also be integrated with Elasticsearch and Kibana for better filtering.

  • Graylog
    It uses a dedicated Docker log driver to send the logs. It has a built-in frontend and needs Elasticsearch to store the logs as well as a MongoDB database to store the configuration.

Example configuration


The example configuration uses Fluentd, Elasticsearch, and Kibana to capture logs from Docker.

Before running CKEditor AI On-Premises, you have to prepare the logging services. For this example, Docker Compose is used. Create the fluentd, elasticsearch, and kibana services inside the docker-compose.yml file:

version: '3.7'
services:
    fluentd:
        build: ./fluentd
        volumes:
            - ./fluentd/fluent.conf:/fluentd/etc/fluent.conf
        ports:
            - "24224:24224"
            - "24224:24224/udp"

    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.8.5
        expose:
            - 9200
        ports:
            - "9200:9200"

    kibana:
        image: docker.elastic.co/kibana/kibana:6.8.5
        environment:
            ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
        ports:
            - "5601:5601"

To integrate Fluentd with Elasticsearch, you first need to install fluent-plugin-elasticsearch in the Fluentd image. To do this, create a fluentd/Dockerfile with the following content:

FROM fluent/fluentd:v1.10-1

USER root

RUN apk add --no-cache --update build-base ruby-dev \
    && gem install fluent-plugin-elasticsearch \
    && gem sources --clear-all

Next, configure the input server and connection to Elasticsearch in the fluentd/fluent.conf file:

<source>
    @type forward
    port 24224
    bind 0.0.0.0
</source>
<match *.**>
    @type copy
    <store>
        @type elasticsearch
        host elasticsearch
        port 9200
        logstash_format true
        logstash_prefix fluentd
        logstash_dateformat %Y%m%d
        include_tag_key true
        type_name access_log
        tag_key @log_name
        flush_interval 1s
    </store>
    <store>
        @type stdout
    </store>
</match>

Now you are ready to run the services:

docker-compose up --build

When the services are ready, you can finally start CKEditor AI On-Premises:

docker run --init -p 8080:8080 \
--log-driver=fluentd \
--log-opt fluentd-address=[Fluentd address]:24224 \
[Your config here] \
docker.cke-cs.com/ai-service:[version]

Now open Kibana in your browser. It is available at http://localhost:5601/. On the first run, you may be asked to create an index pattern. Use the fluentd-* pattern and press the “Create” button. After this step, your logs should appear in the “Discover” tab.