Deploying models with Watson Machine Learning
Last updated: Oct 09, 2024

Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets for production so that they generate predictions and insights.

This graphic illustrates a typical process for a machine learning model. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use.

Figure: Building a machine learning model

IBM Watson Machine Learning architecture and services

Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models.

Deploying and managing models with Watson Machine Learning

Watson Machine Learning supports popular frameworks, including TensorFlow, scikit-learn, and PyTorch, for building and deploying models. For a list of supported frameworks, refer to Supported frameworks.
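
For example, the following sketch stores a trained scikit-learn model and creates an online deployment with the ibm-watson-machine-learning Python client. It is a minimal illustration only: the API key, deployment space ID, software specification name, and model type string are placeholders, and the exact metadata property names can vary between client versions.

  # Minimal sketch: store and deploy a scikit-learn model with the
  # Watson Machine Learning Python client (ibm-watson-machine-learning).
  # Credentials, space ID, software spec name, and model type string are
  # placeholders; metadata property names can differ by client version.
  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression
  from ibm_watson_machine_learning import APIClient

  # Train a small example model locally.
  X, y = load_iris(return_X_y=True)
  model = LogisticRegression(max_iter=200).fit(X, y)

  # Authenticate and select the deployment space to work in.
  client = APIClient({"url": "https://us-south.ml.cloud.ibm.com",
                      "apikey": "<YOUR_API_KEY>"})
  client.set.default_space("<DEPLOYMENT_SPACE_ID>")

  # Store the trained model in the repository, then create an online deployment.
  sw_spec_id = client.software_specifications.get_id_by_name("<SOFTWARE_SPEC_NAME>")
  model_details = client.repository.store_model(
      model,
      meta_props={
          client.repository.ModelMetaNames.NAME: "iris-classifier",
          client.repository.ModelMetaNames.TYPE: "scikit-learn_1.3",
          client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
      },
  )
  model_id = client.repository.get_model_id(model_details)

  deployment = client.deployments.create(
      model_id,
      meta_props={
          client.deployments.ConfigurationMetaNames.NAME: "iris-online",
          client.deployments.ConfigurationMetaNames.ONLINE: {},
      },
  )

After the deployment is created, the same client can score it with client.deployments.score, or you can call the deployment endpoint directly over the REST API as described in the Programming interfaces section.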

After you build and train a model, deploy it by using the deployment infrastructure and programming interfaces that are described in the following sections.

Deployment infrastructure

Programming interfaces

  • Use the Python client library to work with all of your Watson Machine Learning assets in a notebook.
  • Use the REST API to call methods from the base URLs for the Watson Machine Learning API endpoints.
  • When you call the API, use the base URL and append the path for each method to form the complete API endpoint for your requests, as shown in the sketch after this list. For details on checking endpoints, refer to Looking up a deployment endpoint.
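
A minimal scoring call against an online deployment over the REST API might look like the following sketch, which uses the Python requests library. The regional base URL, deployment ID, API version date, and input payload shape are assumptions that you adapt to your own deployment; the bearer token is obtained by exchanging an IBM Cloud API key for an IAM token.

  # Minimal sketch: score an online deployment over the REST API.
  # The region URL, deployment ID, version date, and payload shape are
  # placeholders; look up your actual endpoint as described in
  # "Looking up a deployment endpoint".
  import requests

  API_KEY = "<YOUR_API_KEY>"
  DEPLOYMENT_ID = "<DEPLOYMENT_ID>"
  BASE_URL = "https://us-south.ml.cloud.ibm.com"  # regional base URL

  # Exchange the IBM Cloud API key for an IAM bearer token.
  token = requests.post(
      "https://iam.cloud.ibm.com/identity/token",
      data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": API_KEY},
  ).json()["access_token"]

  # Append the method path to the base URL to form the complete endpoint.
  endpoint = f"{BASE_URL}/ml/v4/deployments/{DEPLOYMENT_ID}/predictions"
  payload = {"input_data": [{"values": [[5.1, 3.5, 1.4, 0.2]]}]}

  response = requests.post(
      endpoint,
      params={"version": "2024-10-09"},           # API version date
      headers={"Authorization": f"Bearer {token}"},
      json=payload,
  )
  print(response.json())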

Parent topic: Deploying and managing models
