Last updated: Oct 09, 2024
You can prompt foundation models in IBM watsonx.ai programmatically using the Python library. For full details, see the Python library reference.
Prerequisites
To run some of the samples in this topic, you need an IBM Cloud API key and a watsonx.ai project ID.
Examples
Example 1: List available models
You can view the members of the ModelTypes enum to see which foundation models are available.
Python code
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
import json
print( json.dumps( ModelTypes._member_names_, indent=2 ) )
Sample output
[
  "FLAN_T5_XXL",
  "FLAN_UL2",
  "MT0_XXL",
  ...
]
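Each ModelTypes member name maps to a provider-qualified model ID string, which is what the service ultimately receives. A minimal sketch using an illustrative stand-in enum (the real members and values live in ibm_watson_machine_learning; the three shown here match the sample output above):

```python
from enum import Enum

# Illustrative stand-in for the library's ModelTypes enum:
# each member's value is the model ID string the service expects.
class ModelTypes(Enum):
    FLAN_T5_XXL = "google/flan-t5-xxl"
    FLAN_UL2 = "google/flan-ul2"
    MT0_XXL = "bigscience/mt0-xxl"

# Member names, as printed in the sample output above
print(ModelTypes._member_names_)

# The value is the string passed to the service
print(ModelTypes.FLAN_T5_XXL.value)
```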
Example 2: View details of a model
You can view details, such as a short description and model limits, using get_details().
Python code
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model
import json
my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<my-IBM-Cloud-API-key>"
}
model_id = ModelTypes.MPT_7B_INSTRUCT2
gen_parms = None
project_id = "<my-watsonx.ai-project-ID>"
space_id = None
verify = False
model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify )
model_details = model.get_details()
print( json.dumps( model_details, indent=2 ) )
Note: Replace <my-IBM-Cloud-API-key> and <my-watsonx.ai-project-ID> with your API key and project ID.
Sample output
{
  "model_id": "ibm/mpt-7b-instruct2",
  "label": "mpt-7b-instruct2",
  "provider": "IBM",
  "source": "Hugging Face",
  "short_description": "MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by IBM.",
  ...
}
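Because the return value of get_details() is a plain Python dict, individual fields can be read directly instead of dumping the whole response. A sketch using a hand-built dict with a subset of the fields from the sample output above:

```python
# Hand-built stand-in for the dict returned by get_details(),
# containing a subset of the fields shown in the sample output
model_details = {
    "model_id": "ibm/mpt-7b-instruct2",
    "label": "mpt-7b-instruct2",
    "provider": "IBM",
    "source": "Hugging Face",
}

# Read a single field instead of printing the full response
print(model_details["model_id"])
```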
Example 3: Prompt a model with default parameters
Prompt a model to generate a response using generate().
Python code
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model
import json
my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<my-IBM-Cloud-API-key>"
}
model_id = ModelTypes.FLAN_T5_XXL
gen_parms = None
project_id = "<my-watsonx.ai-project-ID>"
space_id = None
verify = False
model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify )
prompt_txt = "In today's sales meeting, we "
gen_parms_override = None
generated_response = model.generate( prompt_txt, gen_parms_override )
print( json.dumps( generated_response, indent=2 ) )
Note: Replace <my-IBM-Cloud-API-key> and <my-watsonx.ai-project-ID> with your API key and project ID.
Sample output
{
  "model_id": "google/flan-t5-xxl",
  "created_at": "2023-07-27T03:40:17.575Z",
  "results": [
    {
      "generated_text": "will discuss the new product line.",
      "generated_token_count": 8,
      "input_token_count": 10,
      "stop_reason": "EOS_TOKEN"
    }
  ],
  ...
}
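To use non-default generation parameters, pass a dict in place of gen_parms_override, and read the generated text out of the results list of the response. A sketch assuming REST-style parameter names (the library also exposes them as constants); the response dict here is hand-built from the sample output above, since a live call requires credentials:

```python
# Per-call parameter override; names shown as plain strings
# (assumed to match the service's generation parameter names)
gen_parms_override = {
    "decoding_method": "greedy",
    "max_new_tokens": 30,
    "min_new_tokens": 5,
}

# With live credentials you would call:
#   generated_response = model.generate(prompt_txt, gen_parms_override)
# Hand-built response with the structure from the sample output above:
generated_response = {
    "model_id": "google/flan-t5-xxl",
    "results": [
        {
            "generated_text": "will discuss the new product line.",
            "generated_token_count": 8,
            "input_token_count": 10,
            "stop_reason": "EOS_TOKEN",
        }
    ],
}

# The generated text is in the first entry of the results list
text = generated_response["results"][0]["generated_text"]
print(text)
```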
Parent topic: Foundation models