Prompting a custom foundation model
Last updated: Sep 03, 2024

After a custom foundation model is deployed, use the capabilities of watsonx.ai to prompt the model.

In Prompt Lab, find your custom model in the list of available foundation models. You can then work with the model as you do with foundation models that are provided with watsonx.ai. The simplest way to find your deployed models is to use the filter:

Figure: Finding your custom model by using the filter

After selecting the model, you can:

  • Use the Prompt Lab to create and review prompts for your custom foundation model

Prompting a custom foundation model by using the API

Refer to the following examples to prompt the custom foundation model programmatically. The first request returns the generated text in a single response:

curl -X POST "https://<your cloud hostname>/ml/v1/deployments/<your deployment ID>/text/generation?version=2024-01-29" \
-H "Authorization: Bearer $TOKEN" \
-H "content-type: application/json" \
--data '{
 "input": "Hello, what is your name",
 "parameters": {
    "max_new_tokens": 200,
    "min_new_tokens": 20
 }
}'
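
You can make the same request from a script. The following is a minimal Python sketch that uses the requests library with the endpoint, input, and parameters from the previous example. The hostname, deployment ID, and bearer token are placeholders, and the comment about the response shape is an assumption rather than a guaranteed schema.

import requests

API_URL = (
    "https://<your cloud hostname>/ml/v1/deployments/"
    "<your deployment ID>/text/generation?version=2024-01-29"
)
TOKEN = "<your IAM bearer token>"  # placeholder: obtain a bearer token first

payload = {
    "input": "Hello, what is your name",
    "parameters": {
        "max_new_tokens": 200,
        "min_new_tokens": 20,
    },
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()

# Print the JSON response body; the generated text is typically nested in it
# (for example, in a "results" field), but inspect the response to confirm.
print(response.json())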
curl -X POST "https://<your cloud hostname>/ml/v1/deployments/<your deployment ID>/text/generation_stream?version=2024-01-29" \
-H "Authorization: Bearer $TOKEN" \
-H "content-type: application/json" \
--data '{
 "input": "Hello, what is your name",
 "parameters": {
    "max_new_tokens": 200,
    "min_new_tokens": 20
 }
}'
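
For the streaming endpoint, the client reads the response incrementally. The following Python sketch assumes that the generation_stream endpoint returns server-sent events, where each data: line carries a JSON chunk of generated text; as before, the hostname, deployment ID, and token are placeholders.

import json
import requests

STREAM_URL = (
    "https://<your cloud hostname>/ml/v1/deployments/"
    "<your deployment ID>/text/generation_stream?version=2024-01-29"
)
TOKEN = "<your IAM bearer token>"  # placeholder: obtain a bearer token first

payload = {
    "input": "Hello, what is your name",
    "parameters": {
        "max_new_tokens": 200,
        "min_new_tokens": 20,
    },
}

with requests.post(
    STREAM_URL,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
    stream=True,  # keep the connection open and read the body as it arrives
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # Skip keep-alive blank lines; print each event payload as it arrives.
        if line and line.startswith("data:"):
            chunk = json.loads(line[len("data:"):].strip())
            print(chunk)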

Parent topic: Deploying custom foundation models
