Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets for production so that they generate predictions and insights.
This graphic illustrates a typical process for a machine learning model. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use.
IBM Watson Machine Learning architecture and services
Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models.
Deploying and managing models with Watson Machine Learning
Watson Machine Learning supports popular frameworks, including TensorFlow, scikit-learn, and PyTorch, for building and deploying models. For a list of supported frameworks, refer to Supported frameworks.
To build and train a model:
- Use one of the tools that are listed in Analyzing data and building models.
- Import a model that you built and trained outside of Watson Studio (a sketch of storing an externally trained model follows this list).
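As a minimal sketch of the second option, the following Python code trains a scikit-learn model locally and stores it in a deployment space with the ibm-watson-machine-learning Python client. The region URL, API key, space ID, software specification name, and model type string are assumptions; substitute the values and versions that apply to your service instance and framework.

```python
# Sketch: store a model trained outside Watson Studio in a Watson Machine
# Learning deployment space. Credentials and spec names are placeholders.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from ibm_watson_machine_learning import APIClient

# Train a model locally (outside Watson Studio)
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

# Connect to the Watson Machine Learning service (region URL is an assumption)
client = APIClient({
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<YOUR_IBM_CLOUD_API_KEY>",
})
client.set.default_space("<DEPLOYMENT_SPACE_ID>")

# Store the trained model; the runtime and type strings depend on your
# framework version (see Supported frameworks)
sw_spec_id = client.software_specifications.get_uid_by_name("runtime-22.2-py3.10")
model_details = client.repository.store_model(
    model=model,
    meta_props={
        client.repository.ModelMetaNames.NAME: "iris-classifier",
        client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",
        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
    },
)
model_id = client.repository.get_model_id(model_details)
```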
Deployment infrastructure
- Deploy trained models as web services or for batch processing, as sketched after this list.
- Deploy Python functions to simplify AI solutions.
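The following sketch continues the previous one: it deploys the stored model as an online web service and sends it a scoring request. The deployment name and input values are placeholders, and the exact payload shape depends on your model's input schema.

```python
# Sketch: create an online (web service) deployment for the stored model
# and score it. Assumes `client` and `model_id` from the previous sketch.
deployment = client.deployments.create(
    model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "iris-online-deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
deployment_id = client.deployments.get_id(deployment)

# Send a scoring request to the deployed web service
payload = {"input_data": [{"values": [[5.1, 3.5, 1.4, 0.2]]}]}
predictions = client.deployments.score(deployment_id, payload)
print(predictions)
```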
Programming Interfaces
- Use the Python client library to work with all of your Watson Machine Learning assets in a notebook.
- Use the REST API to call methods from the base URLs for the Watson Machine Learning API endpoints.
- When you call the API, append the path for each method to the base URL to form the complete API endpoint for your requests (see the sketch after this list). For details on checking endpoints, refer to Looking up a deployment endpoint.
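As a hedged sketch of forming a complete endpoint, the following code requests an IAM access token and then calls the online scoring method of the REST API by appending the method path to the regional base URL. The base URL, version date, deployment ID, and payload shown here are assumptions; use the endpoint that you look up for your own deployment.

```python
# Sketch: call the Watson Machine Learning REST API directly.
# Base URL, version date, and deployment ID are placeholders.
import requests

# Exchange an IBM Cloud API key for an IAM access token
token_resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<YOUR_IBM_CLOUD_API_KEY>",
    },
)
access_token = token_resp.json()["access_token"]

# Base URL for the service region plus the method path forms the full endpoint
base_url = "https://us-south.ml.cloud.ibm.com"
path = "/ml/v4/deployments/<DEPLOYMENT_ID>/predictions"
endpoint = f"{base_url}{path}?version=2020-09-01"

response = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {access_token}"},
    json={"input_data": [{"values": [[5.1, 3.5, 1.4, 0.2]]}]},
)
print(response.json())
```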
Parent topic: Deploying and managing models