
CWCloud AI

Purpose​

This feature exposes AI1 models such as LLMs2 through a normalized endpoint.

Here's a quick demo of what you can achieve with this API:

(Screenshot: demo_cwai)

Enabling this API​

In the SaaS version, you can request access through the support system.

If you're an administrator of the instance, you can grant users access like this:

(Screenshot: cwai_enable)

UI chat​

Once you're granted access, you can try the CWAI API using the chat web UI:

(Screenshot: cwai_chat)

Use the API​

Of course, the main purpose is to interact with those adapters through very simple HTTP endpoints:

(Screenshot: cwai_api)

Here's how to get all the available adapters:

curl -X 'GET' 'https://api.cwcloud.tech/v1/ai/adapters' -H 'accept: application/json' -H 'X-Auth-Token: XXXXXX'

Result:

{
  "adapters": [
    "openmistral",
    "mistral",
    "claude3",
    "deepseek",
    "gpt4o",
    "gpt4omini",
    "gemini",
    "log"
  ],
  "status": "ok"
}
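
The response above can be consumed from Python like this — a minimal sketch using only the standard library; it parses the sample payload shown above rather than calling the live API:

```python
import json

# Sample body matching the /v1/ai/adapters response shown above.
raw = """
{
  "adapters": ["openmistral", "mistral", "claude3", "deepseek",
               "gpt4o", "gpt4omini", "gemini", "log"],
  "status": "ok"
}
"""

def available_adapters(payload: str) -> list:
    """Return the adapter names from a /v1/ai/adapters response body."""
    body = json.loads(payload)
    if body.get("status") != "ok":
        raise RuntimeError("unexpected status: %s" % body.get("status"))
    return body["adapters"]

print(available_adapters(raw))
```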

Then prompting with one of the available adapters:

curl -X 'POST' \
  'https://api.cwcloud.tech/v1/ai/prompt' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'X-Auth-Token: XXXXXX' \
  -d '{
    "adapter": "gpt4o",
    "message": "Hey",
    "settings": {}
  }'

The answer would be:

{
  "response": [
    "Hello! How can I assist you today with cloud automation or deployment using CWCloud?"
  ],
  "status": "ok"
}
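
If you prefer Python over curl, the same call can be sketched with the standard library. The token and base URL below are placeholders; the sketch builds the request but leaves sending it (via urlopen) to you:

```python
import json
import urllib.request

API_URL = "https://api.cwcloud.tech"  # or your instance's URL
TOKEN = "XXXXXX"                      # replace with your own token

def build_prompt_request(adapter: str, message: str) -> urllib.request.Request:
    """Build the POST /v1/ai/prompt request shown above."""
    payload = json.dumps({
        "adapter": adapter,
        "message": message,
        "settings": {},
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL + "/v1/ai/prompt",
        data=payload,
        headers={
            "accept": "application/json",
            "Content-Type": "application/json",
            "X-Auth-Token": TOKEN,
        },
        method="POST",
    )

req = build_prompt_request("gpt4o", "Hey")
# Sending it would be: json.load(urllib.request.urlopen(req))
```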

Notes:

  • you have to replace the XXXXXX value with your own token, generated by following this procedure.
  • you can replace https://api.cwcloud.tech with the URL of the API instance you're using, via the CWAI_API_URL environment variable. For Tunisian customers, for example, it would be https://api.cwcloud.tn.
  • it's possible to pass multiple prompts associated with roles (system and user) like this:
curl -X 'POST' \
  'https://api.cwcloud.tech/v1/ai/prompt' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'X-Auth-Token: XXXXXX' \
  -d '{
    "adapter": "gpt4o",
    "messages": [
      {
        "role": "system",
        "message": "You'"'"'re a tech assistant"
      },
      {
        "role": "user",
        "message": "I need help"
      }
    ],
    "settings": {}
  }'
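
The key difference from the one-shot call is the plural messages field (a list of role-tagged entries) replacing the single message string. A small helper can make that explicit — build_role_payload is a hypothetical name for illustration:

```python
import json

def build_role_payload(system: str, user: str, adapter: str = "gpt4o") -> dict:
    """Build the multi-message payload: the plural 'messages' field
    replaces the single 'message' field used for one-shot prompts."""
    return {
        "adapter": adapter,
        "messages": [
            {"role": "system", "message": system},
            {"role": "user", "message": user},
        ],
        "settings": {},
    }

payload = build_role_payload("You're a tech assistant", "I need help")
print(json.dumps(payload, indent=2))
```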

Use the CLI​

You can use the cwc CLI, which provides an ai subcommand:

cwc ai
This command lets you call the CWAI endpoints

Usage:
  cwc ai
  cwc ai [command]

Available Commands:
  adapters    Get the available adapters
  prompt      Send a prompt

Flags:
  -h, --help   help for ai

Use "cwc ai [command] --help" for more information about a command.

List the available adapters​

cwc ai adapters
openmistral
mistral
claude3
deepseek
gpt4o
gpt4omini
log

Send a prompt to an available adapter​

$ cwc ai prompt
Error: required flag(s) "adapter", "message" not set
Usage:
  cwc ai prompt [flags]
  cwc ai prompt [command]

Available Commands:
  details     Get details about a prompt list
  history     Get prompt history

Flags:
  -a, --adapter string   The chosen adapter
  -h, --help             help for prompt
  -l, --list string      Optional list ID
  -m, --message string   The message input
  -p, --pretty           Pretty print the output (optional)

Use "cwc ai prompt [command] --help" for more information about a command.
$ cwc ai prompt --adapter gpt4o --message "Hey"
AI response:
→ Status: ok
→ Response: Hello! How can I assist you today?
→ ListId: 058a8d18-038c-4f09-9b98-20b93076dbe5

You can pass the ListId value as an argument to your next prompts if you want to keep them in the same conversation:

$ cwc ai prompt --adapter gpt4o --message "Hey" -l 058a8d18-038c-4f09-9b98-20b93076dbe5
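
The same conversation-threading idea can be sketched in Python. The Conversation class below is a hypothetical wrapper (the transport function is injected so the sketch stays offline, and the list_id payload field name is an assumption):

```python
class Conversation:
    """Hypothetical wrapper that reuses the list ID returned by the first
    prompt so later prompts stay in the same conversation (mirrors -l)."""

    def __init__(self, adapter, send):
        self.adapter = adapter
        self.send = send        # injected function doing the actual HTTP call
        self.list_id = None

    def prompt(self, message):
        payload = {"adapter": self.adapter, "message": message, "settings": {}}
        if self.list_id is not None:
            payload["list_id"] = self.list_id   # field name is an assumption
        reply = self.send(payload)
        self.list_id = reply.get("list_id", self.list_id)
        return reply["response"]

# Offline usage with a stubbed transport:
def fake_send(payload):
    return {"status": "ok",
            "response": ["Hello! How can I assist you today?"],
            "list_id": "058a8d18-038c-4f09-9b98-20b93076dbe5"}

conv = Conversation("gpt4o", fake_send)
first = conv.prompt("Hey")
```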

Using the FaaS engine​

You can use the FaaS/low code engine to expose an AI assistant as a service like this:

(Screenshot: cwai_faas_blockly)

Then invoke your FaaS function as usual:

(Screenshot: cwai_faas_invoke_cli)

Adapter interface​

This section is for contributors who want to add new adapters.

You can implement your own adapter, which will load a model and generate answers, by implementing this abstract class1:

from abc import ABC, abstractmethod

class ModelAdapter(ABC):
    @abstractmethod
    def generate_response(self, prompt: Prompt):
        pass
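
For illustration, a trivial adapter implementing the interface might look like this. Prompt is stubbed here as a minimal dataclass (the real class lives in the CWCloud code base), and EchoAdapter is a toy name:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Prompt:
    """Minimal stand-in for the real Prompt class in the CWCloud code base."""
    message: str

class ModelAdapter(ABC):
    @abstractmethod
    def generate_response(self, prompt: Prompt):
        pass

class EchoAdapter(ModelAdapter):
    """Toy adapter in the spirit of the built-in 'log' adapter: it calls no
    model and just echoes the prompt back in the normalized response shape."""
    def generate_response(self, prompt: Prompt):
        return {"status": "ok", "response": ["echo: " + prompt.message]}
```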

Then you have to update this list2 with your new adapter:

_default_adapters = [
    'openmistral',
    'mistral',
    'claude3',
    'gpt4o',
    'gpt4omini',
    'log'
]
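
The effect of that registration step can be sketched as a simple membership check — register_adapter is a hypothetical helper, not part of the CWCloud code base:

```python
_default_adapters = [
    'openmistral',
    'mistral',
    'claude3',
    'gpt4o',
    'gpt4omini',
    'log',
]

def register_adapter(name):
    """Hypothetical helper: add your adapter's name if it isn't listed yet."""
    if name not in _default_adapters:
        _default_adapters.append(name)
    return _default_adapters
```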

Footnotes​

  1. Artificial intelligence
  2. Large language model