Using CKEditor AI programmatically
CKEditor AI features are designed to work seamlessly through the built-in UI – but they can also be controlled entirely from code. This page covers two distinct approaches to using AI programmatically: the front-end editor API and the REST API.
CKEditor AI features can be triggered programmatically via the editor instance. This is useful for building custom UI, automating workflows, or integrating AI capabilities into your application logic beyond the built-in editor toolbar.
We are actively expanding the programmatic API for CKEditor AI. Some of the APIs described below are marked as experimental – they are production-ready but may change in minor releases without the standard deprecation policy. Breaking changes will always be documented in the changelog with migration guidance. If you have a use case that is not covered here, please contact us. Your feedback helps us prioritize which APIs to expose next.
All examples below assume the editor is already set up with AI features enabled. See the integration guide for setup instructions.
The AI Chat feature can be controlled via the AIChatController plugin. See the chat documentation for more details on the feature.
The demo below shows a generic sales offer for server infrastructure. Select a target company, then click the button to send a personalized rewrite request to AI Chat – all from code, without any user interaction in the chat UI. The prompt includes the company’s profile data so the AI tailors the offer accordingly.
Select a company
Pick a company profile to personalize the offer, then click the button below.
IronPeak Systems — Server Infrastructure Offer
Dear Team,
We are pleased to present an infrastructure proposal tailored to your needs. IronPeak Systems delivers high-performance server racks and data center solutions designed to scale with your business.
Proposed Configuration
- Rack model: IronPeak R-Series 42U
- Compute: 8 × DualXeon blades (64 cores / 512 GB RAM each)
- Storage: 120 TB NVMe all-flash array
- Networking: Redundant 100 GbE top-of-rack switches
Pricing & Next Steps
The base configuration starts at $185,000 with volume discounts available for multi-rack deployments. We would love to schedule a technical deep-dive with your team. Please let us know your availability.
Best regards,
The IronPeak Systems Sales Team
Use the sendMessage() method to programmatically send a message to AI Chat. You can dynamically construct the message based on your application state – for example, including external data like a company profile:
const aiChatController = editor.plugins.get( 'AIChatController' );

await aiChatController.sendMessage( {
	message: `Rewrite this offer for ${ companyName }.\n\nCompany profile:\n${ profileData }`
} );
To start a new conversation from code, use the startConversation() method:
const aiChatController = editor.plugins.get( 'AIChatController' );
await aiChatController.startConversation();
Attach the current editor selection as context for the next chat message using addSelectionToChatContext():
const aiChatController = editor.plugins.get( 'AIChatController' );
aiChatController.addSelectionToChatContext();
The Quick Actions feature lets you trigger predefined AI actions programmatically via the AIActions plugin. See the quick actions documentation for the full list of available actions and configuration options.
The demo below shows a payment reminder email with hardcoded customer data. The editor is configured with merge fields for customer name, amount, due date, and other placeholders. Click the button to run a custom AI action that automatically replaces the hardcoded values with the appropriate merge field placeholders – built from the editor’s merge fields configuration.
Dear Sarah Johnson,
This is a friendly reminder that your upcoming payment of $2,400.00 is due on April 15, 2026.
Please ensure the funds are available in your account ending in 4821 before the due date to avoid any late fees. If you have already made this payment, please disregard this notice.
Here is a summary of your payment details:
- Amount due: $2,400.00
- Due date: April 15, 2026
- Account: ****4821
- Payment plan: Premium Monthly
If you have any questions or need to adjust your payment schedule, please don't hesitate to reach out to our billing team.
Best regards,
Billing Department
Quick actions operate on the current editor selection. If the selection is collapsed (no text is selected), the action automatically expands to the nearest block element.
Use the executeAction() method to run system actions or fully custom prompts:
const aiActions = editor.plugins.get( 'AIActions' );

// Run a system action.
await aiActions.executeAction(
	{ actionName: 'improve-writing' },
	'Improve writing'
);

// Or run a custom prompt (model is required for custom actions).
await aiActions.executeAction(
	{ userMessage: 'Rewrite the selected text as a haiku', model: 'agent-1' },
	'Make it a haiku'
);
The available system actionName values are defined in AIActionsNames.
The same AI service that powers the editor features is also available as a REST API. You can call it from your frontend – using the editor’s authentication token – to build AI-powered features around the editor. This is especially useful for scenarios where you need AI capabilities outside the editor content area, such as auto-generating a title or meta description in separate form fields based on the editor content.
The demo below shows a form with a title and meta description field above the editor. Click the button to generate both fields from the editor content using the AI REST API.
Large language models have gone from a niche research topic to one of the most talked-about technologies in the world in just a few short years. What started as experiments in predicting the next word in a sentence has evolved into systems that can draft emails, summarize documents, write code, and even hold extended conversations.
From Research to Product
The journey began with transformer architectures introduced in 2017. Early models like GPT-2 demonstrated surprising fluency but were considered too unpredictable for production use. It wasn't until the release of larger, more refined models that businesses started to take notice. Today, language models are embedded in products used by millions of people every single day.
Challenges Ahead
Despite the rapid progress, significant challenges remain. Hallucinations — cases where models generate plausible but incorrect information — continue to be a concern. Energy consumption for training and running these models is substantial. Questions around bias, copyright, and data privacy are still being actively debated.
The next frontier likely involves making these models more efficient, more accurate, and better integrated into existing workflows rather than building ever-larger systems.
The AI REST API (https://ai.cke-cs.com) exposes three categories of endpoints:
- Actions – Stateless, single-purpose content transforms. Use these for operations like fixing grammar, improving writing, translating short sections, adjusting length or tone, or running custom prompts against content.
- Conversations – Multi-turn chat with conversation history, file uploads, and web search capabilities.
- Reviews – Document analysis for grammar, clarity, readability, and tone, returning specific suggestions for improvement. Also supports full-document translation, ensuring all text is translated even in longer content.
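The Actions example later in this guide calls `https://ai.cke-cs.com/v1/actions/system/improve-writing/calls`. Assuming other system actions follow the same URL pattern (verify against the API reference before relying on this), a small helper keeps the endpoint construction in one place:

```javascript
// Build the calls URL for a system action on the AI REST API.
// Assumes all system actions share the pattern used by improve-writing.
function systemActionUrl( actionName ) {
	return `https://ai.cke-cs.com/v1/actions/system/${ actionName }/calls`;
}
```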
AI generation endpoints, such as Actions calls and Conversation message calls, return Server-Sent Events (SSE) streams. This means you cannot simply await response.json() for these responses – instead, you need to read the response stream and parse the individual events. Other REST API endpoints, such as the models endpoint, return regular JSON responses.
The following example shows how to call the AI Actions API from the browser and collect the streamed result:
// Get the editor content.
const html = editor.getData();

// Get the auth token from the editor's token provider.
const token = editor.plugins.get( 'CloudServices' ).token.value;

// Call the AI Actions API (system action: improve-writing).
const response = await fetch( 'https://ai.cke-cs.com/v1/actions/system/improve-writing/calls', {
	method: 'POST',
	headers: {
		'Content-Type': 'application/json',
		'Authorization': `Bearer ${ token }`
	},
	body: JSON.stringify( {
		content: [
			{
				type: 'text',
				content: html
			}
		]
	} )
} );

// Read the SSE stream.
// Each SSE message has an "event:" line (e.g. "text-delta") and a "data:" line with JSON.
const reader = response.body.getReader();
const decoder = new TextDecoder();

let result = '';
let currentEvent = '';

while ( true ) {
	const { done, value } = await reader.read();

	if ( done ) {
		break;
	}

	const chunk = decoder.decode( value, { stream: true } );

	for ( const line of chunk.split( '\n' ) ) {
		if ( line.startsWith( 'event: ' ) ) {
			currentEvent = line.slice( 7 ).trim();
		} else if ( line.startsWith( 'data: ' ) && currentEvent === 'text-delta' ) {
			const data = JSON.parse( line.slice( 6 ) );
			result += data.textDelta;
		}
	}
}
The example above is simplified for clarity. In production, handle errors, authentication token refresh, and edge cases in SSE parsing (such as events split across chunks).
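One of those edge cases, events split across chunk boundaries, can be handled by buffering the trailing partial line between chunks. A minimal, framework-free sketch (the `text-delta` event and `textDelta` field names follow the example above; adapt them to the events your endpoint emits):

```javascript
// Create a stateful SSE parser that buffers incomplete lines across chunks.
// Feed it decoded text chunks; each call returns the "text-delta" payloads
// completed by that chunk.
function createSseParser() {
	let buffer = '';
	let currentEvent = '';

	return function parseChunk( chunk ) {
		buffer += chunk;

		const lines = buffer.split( '\n' );

		// Keep the last (possibly incomplete) line for the next chunk.
		buffer = lines.pop();

		const deltas = [];

		for ( const line of lines ) {
			if ( line.startsWith( 'event: ' ) ) {
				currentEvent = line.slice( 7 ).trim();
			} else if ( line.startsWith( 'data: ' ) && currentEvent === 'text-delta' ) {
				deltas.push( JSON.parse( line.slice( 6 ) ).textDelta );
			}
		}

		return deltas;
	};
}
```

In the reader loop, replace the direct `chunk.split( '\n' )` with a call to this parser so a `data:` line cut in half by the network is only parsed once its terminating newline arrives.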
For the complete API reference, including all available endpoints, request and response formats, streaming, and authentication details, see the AI REST API documentation.