Last updated: Dec 12, 2024
IBM watsonx.ai provides REST APIs that support programmatic tasks for working with foundation models. The same capabilities are available through a Python library and a Node.js SDK that you can use to integrate foundation models into your generative AI applications.
For more resources that can help you with coding tasks, including sample code and communities where you can discuss tips and find answers to common questions, go to the watsonx Developer Hub.
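As a starting point, the following minimal sketch shows how the Python library wraps the REST APIs for text generation. It assumes the ibm-watsonx-ai Python SDK is installed; the endpoint URL, API key, project ID, and model ID are placeholders that you replace with your own values.

```python
# Minimal sketch, assuming the ibm-watsonx-ai Python SDK (pip install ibm-watsonx-ai).
# The endpoint URL, API key, project ID, and model ID below are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # use the endpoint for your region
    api_key="YOUR_IBM_CLOUD_API_KEY",
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",   # any supported foundation model ID
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",
)

# Send a prompt to the text generation endpoint through the library.
generated_text = model.generate_text(
    prompt="Summarize what a foundation model is in one sentence."
)
print(generated_text)
```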
Tasks that you can do programmatically
You can use the watsonx.ai REST API, Python library, or Node.js SDK to do the following tasks programmatically; a brief Python sketch of a few of these tasks follows the table:
Task | Python | Node.js | REST API |
---|---|---|---|
Get details about the available foundation models | Get model specs | Example | List the supported foundation models |
Check the tokens a model calculates for a prompt | Tokenize built-in foundation models | Example | Text tokenization |
Get a list of available custom foundation models | Custom models | | Retrieve the deployments (use the type=custom_foundation_model parameter) |
Inference a foundation model | Generate text | Example | Text generation |
Inference a deploy on demand foundation model | Generate text | | Infer text |
Configure AI guardrails when inferencing a foundation model | Removing harmful content | | Use the moderations field to apply filters to foundation model input and output. See Infer text |
Chat with a foundation model | ModelInference.chat() | Example | Infer text |
Tool-calling from chat | ModelInference.chat() | | Infer text |
Prompt-tune a foundation model | See the documentation | Example | See the documentation |
Inference a tuned foundation model | Generate text | Example | Infer text |
List all prompt templates | List all prompt templates | | Get a prompt template |
List the deployed prompt templates | List deployed prompt templates | | List the deployments (type=prompt_template) |
Inference a foundation model by using a prompt template | Prompt Template Manager | Example | Infer text |
Vectorize text | Embed documents | Example | Text embedding |
Extract text from documents | Text Extractions | | Text extraction |
Rerank document passages | Rerank | | Generate rerank |
Forecast future values | TSModelInference | | Time series forecast |
Integrate with LangChain | IBM extension in LangChain: Chat API, Foundation models, Embedding models | | |
Integrate with LlamaIndex | IBM LLMs in LlamaIndex, IBM embeddings in LlamaIndex | | |
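The sketch below illustrates two rows from the table: chatting with a foundation model (ModelInference.chat()) and vectorizing text (Embeddings and Embed documents). It assumes the ibm-watsonx-ai Python SDK; the model IDs, endpoint URL, and credentials are placeholders, and the response parsing assumes a chat payload with a choices list, so check the SDK reference for the exact shape returned by your version.

```python
# Sketch of the "Chat with a foundation model" and "Vectorize text" tasks,
# assuming the ibm-watsonx-ai Python SDK. All IDs and credentials are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference, Embeddings

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_IBM_CLOUD_API_KEY",
)
project_id = "YOUR_PROJECT_ID"

# Chat with a foundation model.
chat_model = ModelInference(
    model_id="ibm/granite-13b-chat-v2",
    credentials=credentials,
    project_id=project_id,
)
response = chat_model.chat(
    messages=[{"role": "user", "content": "List three uses for text embeddings."}]
)
# Assumes a choices-style payload; verify the response format for your SDK version.
print(response["choices"][0]["message"]["content"])

# Vectorize text with an embedding model.
embedding_model = Embeddings(
    model_id="ibm/slate-125m-english-rtrvr",
    credentials=credentials,
    project_id=project_id,
)
vectors = embedding_model.embed_documents(
    texts=["First passage to index.", "Second passage to index."]
)
print(len(vectors), len(vectors[0]))  # number of vectors, dimensions per vector
```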
Learn more
- Credentials for programmatic access
- Finding the project ID
- Foundation model IDs
- Python library
- Node.js SDK
- REST API
- Vectorizing text
- Reranking document passages
- Extracting text from documents
- Adding generative chat function to your applications with the chat API
- Building agent-driven workflows with the chat API
- Forecasting future values
Parent topic: Developing generative AI solutions