Build a new Gen AI app
Get started quickly with this guide to the builder application setup. It walks you through creating a new generative application that can be embedded into your webpage via an iframe or called through a plain REST API (using the SWE SDK).
You can create an app using the SWE UI console, the REST APIs, or the SDK.
Using the UI
Creating an app takes a few simple steps:
- Create an app: on the Application page, press the "Create" button. Give it a name, and you will be directed to the application creation studio.
- Configure the underlying LLM model. Once the model is configured, the application playground is enabled, where you can experiment with your application's interactions. To read more about configuring a privately deployed LLM, please visit the Register a self-hosted model guide.
- Once your application is ready to go, you can "Publish" it, and the application will then be ready to be embedded. To read more about embedding your application, please visit the Embed your app guide.

To read more about configuring and further fine-tuning your app, visit our Build an application set of guides.

Using the SDK
Prerequisites
To install and properly configure the SWE SDK, please visit the SDK guide.
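The code examples below assume an authenticated SDK client named `sw`. The snippet here is only a sketch of that setup: the class name, import path, and credential parameter names are assumptions, so follow the SDK guide for the actual constructor.

```python
# Hypothetical client setup — the import path and credential parameter
# names are assumptions; see the SDK guide for the real constructor.
from superwise_api.superwise_client import SuperwiseClient

sw = SuperwiseClient(
    client_id="<your-client-id>",
    client_secret="<your-client-secret>",
)
```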
Step 1: Create model
Select a model that will form the cornerstone of your application.
```python
from superwise_api.models.application.schemas import ModelLLM, ModelProvider, OpenAIModelVersion

model = ModelLLM(
    provider=ModelProvider.OPENAI,
    version=OpenAIModelVersion.GPT_4,
    api_token="Your API token",
)
```
If you want to see which models and versions each provider supports, use the following code.

List the currently supported external providers:
```python
from superwise_api.models import ModelProvider

print([provider.value for provider in ModelProvider])
```

List the currently supported model versions:
```python
from superwise_api.client import GoogleModelVersion, OpenAIModelVersion

print([model_version.value for model_version in GoogleModelVersion])
print([model_version.value for model_version in OpenAIModelVersion])
```
Step 2: Create application
Create a new application with the LLM model from Step 1:
```python
from superwise_api.models.application.schemas import ApplicationConfigPayload

application_config_payload = ApplicationConfigPayload(
    model=model, prompt="Your prompt", tools=[], name="Application name"
)
app = sw.application.create(application_config_payload)
```
You can also skip Step 1 by creating an empty application and then updating it in the same manner, adding the model, tools, prompt, or any other properties you desire.
Create an empty application:
```python
from superwise_api.models.application.schemas import ApplicationConfigPayload

application_config_payload = ApplicationConfigPayload(
    model=None, prompt=None, tools=[], name="My Application"
)
app = sw.application.create(application_config_payload)
```
Add a model to the application:
```python
from superwise_api.models.application.schemas import (
    ApplicationConfigPayload,
    ModelLLM,
    ModelProvider,
    OpenAIModelVersion,
)

model = ModelLLM(provider=ModelProvider.OPENAI, version=OpenAIModelVersion.GPT_4, api_token="Your API token")
update_app_payload = ApplicationConfigPayload(model=model, prompt=None, tools=[], name="My Application")
app = sw.application.put(str(app.id), update_app_payload)
```
Test the model connection:
```python
try:
    sw.application.test_model_connection(model)
    print("Model test connection success!")
except Exception as e:
    print(f"Model test connection failed! {e}")
```
Step 3: Experiment with your app using the Playground API
You can use the playground API to test your application's responses. The playground section and APIs do not use the published application directly; instead, they internally test your current app configuration.
```python
from superwise_api.models import AskRequestPayload, ApplicationConfig

ask_request_payload = AskRequestPayload(
    input="Your question",
    chat_history=[],
    config=ApplicationConfig(model=model, prompt="Your prompt", tools=[]),
)
answer = sw.application.ask(ask_request_payload)
```
Step 4: Embed and use your app
If you are satisfied with the configured app, you can go ahead and embed it, either as an iframe or using our APIs (coming soon). Simply submit the following request:
```python
import requests

endpoint_url = f"{app.url}v1/ask"
payload = {
    "chat_history": [],
    "input": "When was the Eiffel Tower built?",
}
resp = requests.post(endpoint_url, json=payload)
print(resp.json())
```
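When calling the endpoint from application code, it can help to wrap the request in a small helper that assembles the payload and checks the HTTP status before parsing the body. The sketch below is ours, not part of the SDK: the function names are hypothetical, while the `input` and `chat_history` field names come from the request above.

```python
import requests


def build_ask_payload(question, chat_history=None):
    """Assemble the JSON body for the /v1/ask endpoint (hypothetical helper)."""
    return {"chat_history": chat_history or [], "input": question}


def ask_app(base_url, question, chat_history=None):
    """POST a question to the app endpoint and return the parsed JSON answer."""
    resp = requests.post(f"{base_url}v1/ask", json=build_ask_payload(question, chat_history))
    resp.raise_for_status()  # surface HTTP errors instead of parsing an error body
    return resp.json()
```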
