Create a new AI Assistant Retrieval agent
If you have a single data source that you want to connect to your application as its context, and you need the application to retrieve information from that source, this is the ideal application type for you. The AI Assistant Retrieval application is designed to work quickly and efficiently.
This step-by-step guide will walk you through creating a chat application and embedding it directly into your platform.
Using the UI
Step 1: Create your application
First things first, let’s create your chat application:
- Hit the Create button in the Applications screen to start the process.
- Enter a meaningful name for your chat application that reflects its purpose or the service it will provide.
- Select the application type "AI Assistant Retrieval".

Step 2: Connect an LLM
To activate your chat application, it's crucial to connect it to an LLM. Your application will not function until this step is completed:
- Select your desired model provider from the following options:
- OpenAI
- GoogleAI
- OpenAI Compatible
- Anthropic
- Depending on your chosen provider, complete the following:
- Choose your preferred model and version from the available list if using OpenAI or GoogleAI.
- Provide the API key or connection details necessary to establish a link between Superwise and the LLM.
- Test your application connection.

Good to know
Use the Playground to immediately test the chat application with the connected LLM.
Step 3: Integrate Context into Your Application
For this application type to function optimally, it's necessary to incorporate additional context. The application will retrieve information from this supplementary context to enhance its performance.
Available context types:
- SQL DB
- VectorDB
- Knowledge
Step 4: Save & Publish
You're just a step away from launching your chat application:
- Click the"Save & Publish" button to finalize and go live.
- Post-publishing, the application is prepared for embedding within your platform
Additional configuration options
- Add a prompt: Customizing the initial prompt enables the assistant to perform better by providing context and setting the direction of the conversation. You can read more about Prompt engineering guidelines here.
Using the SDK
Prerequisites
To install and properly configure the SUPERWISE® SDK, please visit the SDK guide.
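The SDK snippets below assume an initialized client named sw. A minimal sketch of that initialization is shown here for orientation only; the exact import path and credential parameters are an assumption, so follow the SDK guide for the authoritative setup:

# Assumed initialization; verify the import path and auth parameters against the SDK guide.
from superwise_api.superwise_client import SuperwiseClient

sw = SuperwiseClient(client_id="Your client ID", client_secret="Your client secret")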
Step 1: Create model
Select a model that will form the cornerstone of your application.
from superwise_api.models.application.application import OpenAIModel, OpenAIModelVersion

# Define the LLM that will power the application (replace with your OpenAI API token)
llm_model = OpenAIModel(version=OpenAIModelVersion.GPT_4, api_token="OpenAI API token")
If you want to see which models and versions are available from each provider, use the following code:
List the currently supported external providers:
from superwise_api.models.application.application import ModelProvider
[provider.value for provider in ModelProvider]
List the currently supported model versions:
from superwise_api.models.application.application import GoogleModelVersion, OpenAIModelVersion, AnthropicModelVersion
display([model_version.value for model_version in GoogleModelVersion])
display([model_version.value for model_version in OpenAIModelVersion])
display([model_version.value for model_version in AnthropicModelVersion])
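The model constructor pattern is the same across providers. As a hedged sketch, an Anthropic-backed model could be created as follows, assuming an AnthropicModel class mirrors OpenAIModel (verify the class name in your SDK release; the version is picked from the enum listed above to avoid guessing member names):

# Sketch only: assumes AnthropicModel mirrors OpenAIModel; verify in your SDK version.
from superwise_api.models.application.application import AnthropicModel, AnthropicModelVersion

# Pick a version from the AnthropicModelVersion enum listed above (here, simply the first entry).
llm_model = AnthropicModel(version=list(AnthropicModelVersion)[0], api_token="Anthropic API token")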
Step 2: Create context
from superwise_api.models.tool.tool import ToolConfigSQLDatabasePostgres
from superwise_api.models.context.context import ContextDef
db_context = ContextDef(name="My DB", config=ToolConfigSQLDatabasePostgres(connection_string="My connection string"))
Step 3: Create application
from superwise_api.models.application.application import AIAssistantConfig
app = sw.application.create(name="application name", additional_config=AIAssistantConfig(context=db_context), llm_model=llm_model, prompt=None)
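Here prompt=None uses the default behavior. To steer the assistant (see the prompt note in the UI section above), you can pass a custom prompt instead; a minimal variation of the same call, assuming prompt accepts a plain string:

# Same call as above, but with a custom prompt (assumed to be a plain string)
app = sw.application.create(
    name="application name",
    additional_config=AIAssistantConfig(context=db_context),
    llm_model=llm_model,
    prompt="You are a support assistant. Answer only from the connected database context.",
)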
Modify model parameters
In the context of Large Language Models (LLMs), parameters are the adjustable factors that influence how the model generates text and makes decisions based on given inputs.
- Temperature: Controls the randomness of the model’s outputs; lower values produce more deterministic responses, while higher values increase variety.
- OpenAI: The temperature value can range between 0 and 2, with a default value of 0.
- GoogleAI: The temperature value can range between 0 and 1, with a default value of 0.
- Top-p (Nucleus Sampling): Limits the model’s output options to a subset of the highest-probability tokens that collectively account for a probability mass of p, ensuring more coherent text generation.
- The top-p value can range between 0 and 1, with a default value of 1.
- Top-k Sampling: Restricts the model to sampling from the top k most probable tokens, reducing the likelihood of selecting less relevant words.
- The top-k value can be any positive integer, with a default value of 40.
Model parameters availability
Please note that parameters are available only on GoogleAI and OpenAI models, and currently, they are accessible exclusively through the SDK. Additionally, the top-k parameter is exclusively available on GoogleAI models.
from superwise_api.models.application.application import GoogleModel, GoogleModelVersion, GoogleParameters

model = GoogleModel(version=GoogleModelVersion.GEMINI_1_5, api_token="Add your API token", parameters=GoogleParameters(temperature=1, top_p=0.9, top_k=30))
app = sw.application.put(str(app.id), additional_config=AIAssistantConfig(context=db_context), llm_model=model, prompt=None, name="Application name")
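For OpenAI models the pattern is analogous; the sketch below assumes an OpenAIParameters class exists alongside GoogleParameters (top_k is omitted because it is available only on GoogleAI models):

# Sketch only: assumes OpenAIParameters mirrors GoogleParameters; note there is no top_k for OpenAI.
from superwise_api.models.application.application import OpenAIModel, OpenAIModelVersion, OpenAIParameters

model = OpenAIModel(version=OpenAIModelVersion.GPT_4, api_token="OpenAI API token", parameters=OpenAIParameters(temperature=0.5, top_p=0.9))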
SQL DB context
By using the SQL DB context option, you can connect your database and provide the model with valuable context, improving the quality of interactions.
Here’s how you can do it:
Database support and configuration
Supported DBs
Ensure that your database is one of the following supported types:
- PostgreSQL (dialect: postgresql)
- MySQL and MariaDB (dialect: mysql)
- Oracle (dialect: oracle)
- Microsoft SQL Server (dialect: mssql)
- BigQuery
Password limitations
Please note that passwords should not contain the characters @ and :. If your password includes these characters, you will need to encode them in your connection string as follows:
- Replace @ with %40. For example: dialect://admin:My%40Password@host:port/dbtest
- Replace : with %3A. For example: dialect://admin:My%3APassword@host:port/dbtest
Database Query Limit Notice
To maintain optimal performance, each query is currently capped at 100 entries.
Postgres table schema
Please ensure that all column names are lowercase.
Using the UI
Step 1: Add the Context
Click on the "+ Cool" button, and then SQL DB tool" start setting up the database connection.
Step 2: Configure the Context
- Assign a meaningful name to the tool.
- Database Selection and Connection: Identify and connect to your chosen database by providing the requisite details:
  - For BigQuery:
    - Specify your BigQuery project.
    - Input the BigQuery dataset you intend to query.
    - Input your GCP service account information for credential validation (in JSON format).
  - For other database types:
    - Construct your database URL in the format dialect://username:password@host:port/database, replacing the placeholders with the actual values for your database credentials and connection information.
- Select specific tables for your context: To enhance application performance and control access to specific tables in your database, you can configure your application to access only selected tables. You can choose which tables the tool can access, grant access to all tables if needed, or enable automatic inclusion of any new tables added to the database.

Using the SDK
You can find the complete creation flow of the "AI Assistant Retrieval" application here. This SDK code snippet pertains to the creation of the SQL DB context within this flow.
from superwise_api.models.tool.tool import ToolConfigSQLDatabasePostgres
from superwise_api.models.context.context import ContextDef
db_context = ContextDef(name="My DB", config=ToolConfigSQLDatabasePostgres(connection_string="POSTGRESS DB CONNECTION STRING"))
Vector DB context
VectorDB enables you to generate embeddings for your content, such as websites or extended texts, and store them in your dedicated VectorDB. Once stored, SUPERWISE® facilitates the retrieval of data from these embeddings, allowing your application's model to use it as additional context. Currently, SUPERWISE® supports pgvector (PostgreSQL) and Pinecone. Learn more about the tool concept here.
Pay attention
Please note that to achieve this successfully, you must use the same embedding model in SUPERWISE® that you used to store the data.
PostgreSQL limitations
- Password limitations: Please note that passwords should not contain the characters @ and :. If your password includes these characters, you will need to encode them in your connection string as follows:
  - Replace @ with %40. For example: dialect://admin:My%40Password@host:port/dbtest
  - Replace : with %3A. For example: dialect://admin:My%3APassword@host:port/dbtest
- Table schema limitation: Please ensure that all column names are lowercase.
pgvector prerequisite: Setting up VectorDB for Superwise integration
Before you begin, ensure your database meets the following requirements. When connecting a Postgres VectorDB to the Superwise application, the following tables must exist in the database:
langchain_pg_collection
This table is used to save all the collections of documents (referred to as a "table" in the Superwise platform).
DDL:
CREATE TABLE public.langchain_pg_collection (
    name varchar NULL,
    cmetadata json NULL,
    uuid uuid NOT NULL,
    CONSTRAINT langchain_pg_collection_pkey PRIMARY KEY (uuid)
);
Columns explanation:
- name: The name of the collection (this is the table_name when creating the tool).
- cmetadata: Metadata for the collection.
- uuid: The ID of the collection.
langchain_pg_embedding
This table is connected to the langchain_pg_collection table and stores documents along with their embeddings.
DDL:
CREATE TABLE public.langchain_pg_embedding (
    collection_id uuid NULL,
    embedding public.vector NULL,
    document varchar NULL,
    cmetadata json NULL,
    custom_id varchar NULL,
    uuid uuid NOT NULL,
    CONSTRAINT langchain_pg_embedding_pkey PRIMARY KEY (uuid)
);

ALTER TABLE public.langchain_pg_embedding
    ADD CONSTRAINT langchain_pg_embedding_collection_id_fkey
    FOREIGN KEY (collection_id)
    REFERENCES public.langchain_pg_collection(uuid)
    ON DELETE CASCADE;
Columns explanation:
- collection_id: The ID of the collection the document is connected to.
- document: The text document.
- embedding: Embedding of the document.
- cmetadata: Metadata for the embedding (to enable cite sources, it should contain the source information, like this: {"source": "https://js.langchain.com/docs/modules/memory"}).
- custom_id: User-defined custom ID.
- uuid: The ID of the document embedding.
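These two tables match the schema that LangChain's PGVector store creates and populates. If you have not stored embeddings yet, the following is a hedged sketch of populating them with LangChain; the import paths and class names vary between LangChain releases, so verify against your installed version, and remember that the embedding model used here must match the one you later configure in SUPERWISE®:

# Sketch only: LangChain import paths change between releases; verify before use.
from langchain_community.vectorstores.pgvector import PGVector
from langchain_openai import OpenAIEmbeddings
from langchain_core.documents import Document

docs = [
    Document(
        page_content="Your document text",
        # A "source" key in the metadata enables cite sources in SUPERWISE®
        metadata={"source": "https://js.langchain.com/docs/modules/memory"},
    )
]

PGVector.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(model="text-embedding-ada-002"),  # must match the embedding model configured in SUPERWISE®
    collection_name="my_collection",  # becomes the table_name when creating the context
    connection_string="postgresql+psycopg2://username:password@host:port/database",
)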
Using the UI
This guide will walk you through creating a VectorDB context using the user interface. A VectorDB context connects your system to a database containing vector embeddings, which can be used to enrich prompts and improve model understanding.
- Add a new context:
  - Click the "+ Context" button. This opens a menu where you can choose the type of context you want to add.
  - Select "VectorDB" to begin setting up the connection.
- Name your context: Assign a descriptive name to your context.
- Choose the VectorDB type: This refers to the specific database technology used for your VectorDB. If you're unsure, consult your system administrator.
- Connect to your database: Enter the necessary connection details for your VectorDB instance. The required details vary depending on the specific VectorDB you are using:
  - Pgvector:
    - Provide the connection string in the following format: postgresql://username:password@host:port/database
    - Enter the schema name (optional)
    - Enter the table name
  - Pinecone:
    - Enter your Pinecone API key
    - Provide the index name
- Link your embedding model: Provide information about the specific model here. Note that you must use the same embedding model in SUPERWISE® that you used to store the data.

Using the SDK
You can find the complete creation flow of the "AI Assistant Retrieval" application here. This SDK code snippet pertains to the creation of the VectorDB context within this flow.
PGVector VectorDB
from superwise_api.models.tool.tool import OpenAIEmbeddingModel, OpenAIEmbeddingModelVersion, ToolConfigPGVector
from superwise_api.models.context.context import ContextDef
vector_context = ContextDef(
    name="Context name",
    config=ToolConfigPGVector(
        connection_string="Connection string",
        table_name="Table name",
        db_schema="Schema",
        embedding_model=OpenAIEmbeddingModel(version=OpenAIEmbeddingModelVersion.TEXT_EMBEDDING_ADA_002, api_key="API KEY"),
    ),
)
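As with the SQL DB context, attach vector_context to the application by passing additional_config=AIAssistantConfig(context=vector_context) to sw.application.create or sw.application.put, exactly as in the application-creation step above.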
Pinecone VectorDB
from superwise_api.models.tool.tool import OpenAIEmbeddingModel, OpenAIEmbeddingModelVersion, ToolConfigPineconeVectorDB
from superwise_api.models.context.context import ContextDef
vector_context = ContextDef(
    name="Context name",
    config=ToolConfigPineconeVectorDB(
        api_key="pinecone api key",
        index_name="test",
        embedding_model=OpenAIEmbeddingModel(version=OpenAIEmbeddingModelVersion.TEXT_EMBEDDING_ADA_002, api_key="OpenAI API key"),
    ),
)
Knowledge context
SUPERWISE® enables you to boost your application by connecting a pre-created knowledge base within the Superwise application. For setup and indexing guidance, refer to pre-created Knowledge. This integration enhances your application's responsiveness and enriches the user experience.
Using the UI
Step 1: Add the Knowledge Context
Click on the "+ Context" button, and then SW Knowledge"
Step 2: Configure the Context
- Assign a meaningful name to the context.
- Select your pre-created knowledge.

Using the SDK
You can find the complete creation flow of the "AI Assistant Retrieval" application here. This SDK code snippet pertains to the creation of the Knowledge context within this flow.
from superwise_api.models.tool.tool import ToolConfigKnowledge, OpenAIEmbeddingModel, OpenAIEmbeddingModelVersion, UrlKnowledgeMetadata
from superwise_api.models.context.context import ContextDef
knowledge_context = ContextDef(
    name='Context name',
    config=ToolConfigKnowledge(
        knowledge_id='Knowledge ID',
        knowledge_metadata=UrlKnowledgeMetadata(url='URL', max_depth=2),  # max_depth: integer crawl depth from the URL
        embedding_model=OpenAIEmbeddingModel(version=OpenAIEmbeddingModelVersion.TEXT_EMBEDDING_ADA_002, api_key="OpenAI API key"),
    ),
)
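Finally, attach the Knowledge context to your application as in the main creation flow (AIAssistantConfig and llm_model are defined in the application-creation step above):

# Attach the Knowledge context when creating the application
app = sw.application.create(name="application name", additional_config=AIAssistantConfig(context=knowledge_context), llm_model=llm_model, prompt=None)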