LLM Service

The LLM service turns AI models into production-ready agents. It is accessed through code points that work with the supported AI models; by calling the service, a code point engages an AI model to carry out work while servicing tasks.

How it Works

The LLM service is accessed through preconfigured code points that are available to all organizations. The following steps show how to use the service.

Import Points

Import llm_service and get_llm_json.

{
    "imports": [
        "llm_service",
        "get_llm_json"
    ]
}

Set Parameters (optional)

The LLM service supports the following parameters:

  • prompt: The input text or question you want the AI model to respond to. Multiple prompts can be passed as an array of the form [{ role: 'ROLE', content: 'PROMPT' }], where role can be system, user, or assistant (see the sketch after this list).
  • max_tokens: The maximum number of tokens that the model is allowed to generate in the response. Default is 2048.
  • temperature: Controls the randomness of the response. Lower values make the output more deterministic. Default is 0.6.
  • model: Specifies the supported AI model to use for generating the response. If not specified, the current default model is used.
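
The example under Call Service in Point uses the array form of prompt. For a simpler case, the sketch below assumes a single string prompt is accepted, as described for the prompt parameter above; the values are illustrative only.

// Minimal sketch: single string prompt, other parameters left at their defaults
// (the string prompt form and these values are assumptions for illustration)
const simpleParams = {
    prompt: 'Summarize this support ticket in one sentence.',
    max_tokens: 256
};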

Call Service in Point

Below is an example of calling the LLM service in a code point and parsing the response using the get_llm_json service.

// Define parameters
const params = {
    prompt: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'What is the weather like today?' }
    ],
    max_tokens: 150,
    temperature: 0.7,
    model: 'gpt-3.5-turbo'
};

// Call LLM service
const response = await llm_service(params);
if (response?.reason !== 'stop' || !response?.result) {
    throw new Error(response?.reason ?? 'No response from LLM');
}

// Add the assistant's response to the prompt array
params.prompt.push({ role: 'assistant', content: response.result });

// Parse the response
const parsedResult = await get_llm_json({ prompt: params.prompt, last_response: response.result });
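
The shape of parsedResult depends on the JSON structure your prompt asks the model to produce. As a minimal sketch, assuming get_llm_json returns the parsed object (and something falsy when parsing fails) and that the prompt requested a hypothetical summary field:

// Use the parsed JSON; the 'summary' field is hypothetical and depends on
// the structure requested in the prompt
if (parsedResult && typeof parsedResult === 'object') {
    console.log(parsedResult.summary);
} else {
    throw new Error('LLM response could not be parsed as JSON');
}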

What’s Next