The Decision Optimization experiment UI has different views in which you can select data, create models, solve different scenarios, and visualize the results.
Before you can run models in an experiment, you need to:
- Add a Machine Learning service to your project. You can either add this service at the project level (see Creating a Watson Machine Learning Service instance), or you can add it when you first create a new Decision Optimization experiment: click Add a Machine Learning service, select or create a New service, click Associate, and close the window.
- Associate a deployment space with your Decision Optimization experiment (see Deployment spaces). A deployment space can be created or selected when you first create a new Decision Optimization experiment: click Create a deployment space, enter a name for your deployment space, and click Create. For existing models, you can also create or select a space in the Overview information pane.
When you add a Decision Optimization experiment as an asset in your project, you open the Decision Optimization experiment UI.
With the Decision Optimization experiment UI, you can create and solve prescriptive optimization models that focus on the specific business problem that you want to solve. To edit and solve models, you must have Admin or Editor roles in the project. Viewers of shared projects can only see experiments, but cannot modify or run them.
You can create a Decision Optimization model from scratch by entering a name, or by choosing a .zip file, and then selecting Create. Scenario 1 opens.
With the Decision Optimization experiment UI, you can create several scenarios, with different data sets and optimization models. Thus, you can create and compare different scenarios and see what impact changes can have on a problem.
For a step-by-step guide to building, solving, and deploying a Decision Optimization model by using the user interface, see the Quick start tutorial with video.
Overview
The Overview tab provides a summary of information about all your scenarios. (For more information about scenarios, see Scenario pane.) This summary is useful when you have several scenarios, as it gives you model, data, and solution information for all your scenarios at a glance. It also shows whether a scenario uses the default environment set for that type of model or a different environment for that particular scenario. For more information, see Selecting a different run environment for a particular scenario. In the Overview, you can:
- Create a scenario.
- Duplicate a scenario.
- Rename a scenario.
- Run a scenario.
- Export the scenario as a .zip file.
- Generate a Python notebook from a scenario.
- Save the scenario as a model for deployment. (The data types set in the Prepare data view and any run configuration parameters that you might have set for that scenario are also saved in the deployment.)
- Delete a scenario.
In this view, when you click the information icon, the information pane opens, showing details about your experiment and the name of your associated deployment space. Here you can create a Machine Learning service and add this service to your project if you haven't already done so. You can also create or choose a deployment space for your experiment so that you can use a different space for a particular solve. The creation date and the name of the experiment creator are also provided here. This information is useful if you are sharing an experiment created by another collaborator.
The information pane also has an Environment tab. Here you can see the default run environment that is used for the solve when you click Run in the Build model view. The environment depends on your model type. Modeling Assistant models require Python environments. See Hardware and software configuration.
You can run or delete multiple scenarios from this Overview by selecting them and clicking Run or Delete. These buttons are visible only when a selection is made. If one or more scenarios in your selection cannot be run (for example, because no environment has been created), the Run button is unavailable. However, a tooltip provides information about why the scenario cannot be run. You can also stop a run from the Overview pane by clicking the stop button that appears while the scenario is running.
You can also configure this Overview pane by clicking the Settings icon. This action opens a pane where you can select the columns that you want to display in your Overview pane. You can add engine settings as a column for OPL models; in this case, the value yes appears in the table. If you click this value, the engine settings are displayed.
Hardware and software configuration
When you use the experiment UI, the necessary environments are created for you automatically. However, you can configure the environment that is used for your solve by changing the default environment. This environment is then applied to all scenarios in your experiment. The environment depends on your model type: Python, OPL, CPLEX, CPO, or Modeling Assistant (which uses Python environments). For example, to change the default Python environment for DOcplex and Modeling Assistant models, see Configuring environments and adding Python extensions. That topic also shows you how to select a different run environment for a particular scenario, without changing the default for all the other scenarios.
The Decision Optimization environment currently supports Python 3.10, which is the default version.
For each of the following views, you can organize your screen as full-screen or as a split-screen. To do so, hover over one of the view tabs (Prepare data, Build model, Explore solution) for a second or two. A menu then appears where you can select Full Screen, Left or Right. For example, if you choose Left for the Prepare data view, and then choose Right for the Explore solution view, you can see both these views on the same screen.
Prepare data view
When you create a new Decision Optimization experiment in your project, the Prepare data view opens. In this view, you can browse and import data sets, including connected data, that you already have in your project. You can also add new data to your project: click Add data, then Browse in the data pane that opens, select your files, and click Open to add them. When you add a data set in this way, it appears in the Prepare data view and also in the Data assets listed in your project.
Select the files that you want to import to your scenario and click Import. You can import files in most formats, including .csv, .xls, and .json files, and connected data. If you are using Excel files with multiple sheets, only the first sheet is imported. However, you can export each sheet as a .csv file to import your data into your Decision Optimization experiment.
If a .csv file contains any malicious payload (formulas, for example) in an input field, these items might be executed.
Subsequently, if you modify, replace, or delete a data set in your project, these actions have no impact on your scenario, unless you choose to import it into your scenario. Similarly, if you re-upload a new version of a table by using the Add data button in the Prepare data view, your scenario is not affected, unless you choose to import it into your scenario.
In the Prepare data view, you can:
- Rename or delete a table.
- Edit the data directly in a table. You can scroll the table to see more rows (or Open the table in full mode to see the whole table and edit it in a new window).
- Rename column names.
- Resize columns.
- Change the data type (number or string) of a column. (These types are used when you save your scenario as a model for deployment.)
- Add or remove rows.
- Search and filter table values. See Table search and filtering.
- Sort tables.
- Export tables to project.
- Run the model.
If you re-import a file at any time, you can choose to import it with a new name. This renaming can be useful if you want to use different versions of the same data table. You can also choose to update and overwrite the current table in your scenario. If you choose to re-import and update a table, a notification message reminds you which tables have been overwritten.
Changes that you make in the Prepare data view are saved in your scenario, but not in the project data assets, unless you export the table to your project. Similarly, changes that you make to the project data assets do not appear in the Prepare data view unless you import them into your scenario.
To export a table to your project: click the three dots and select Export to project. A new window opens where you can enter a file name and choose to create a new project data asset or overwrite an existing one. If you choose to overwrite a connected data file, the table in the connection will be updated as well.
For an example that includes exporting tables, see the CopyAndSolveScenarios notebook in the Jupyter folder of the DO-samples in the Decision Optimization GitHub.
You can access your imported data from your Python DOcplex model by using the syntax inputs['tablename']. See Input and output data.
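For example, here is a minimal sketch of reading an input table inside a Python DOcplex model. The table name food is invented for this illustration; the inputs dictionary is provided by the experiment runtime, so this snippet runs only inside an experiment or a generated notebook.

```python
import pandas as pd  # values in 'inputs' are pandas DataFrames

# 'inputs' is supplied by the Decision Optimization experiment runtime:
# one DataFrame per table that is shown in the Prepare data view.
# The table name 'food' is a made-up example; use your own table names.
food = inputs['food']
print(food.head())  # inspect the first rows of the imported table
```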
Build model view
When you click Build model in the sidebar for the first time, a window appears where you can choose how you want to formulate your model. You can choose to use the assisted mode with the Modeling Assistant, or create or import a model in Python, OPL, LP (CPLEX), or CPO code.
In this view, you can formulate, or import, optimization models and run them.
You have several options to create a model:
- Create and edit a Python or OPL model in the Decision Optimization experiment UI. See OPL models.
- Use the Modeling Assistant to formulate models in natural language. See Formulating and running a model: house construction scheduling for a tutorial on formulating models with the Modeling Assistant.
- Import and edit a Python optimization model from an existing notebook. Use this option to import a notebook from your project. If your notebook is running on a Jupyter customized environment (see Adding a customization), when you import the notebook into the experiment UI, you also import this environment definition. Thus, you can use additional Python libraries when you run models from the experiment UI. This custom software definition will also be used when you deploy your model in Watson Machine Learning (both when you save your model for deployment and when you promote it to your deployment space).
- Import and edit a Python optimization model from an external file. Use this option to import a Python file from your local computer.
- Import and edit an OPL model from a file.
- Import and edit a CPLEX model from a file.
- Import a scenario .zip file (that contains both model and data). This file can be a new scenario or one that you have previously exported from the Decision Optimization experiment UI and edited locally.
- Generate a Python notebook from your current scenario (Python and Modeling Assistant models only). This option creates a Python notebook that contains the optimization model in your project.
When you edit your model formulation in the Decision Optimization experiment UI, your content is saved automatically, and the Last saved time is displayed.
When you have created a model, the Replace arrow appears. If you click this Replace arrow, you return to the Model wizard. Note that if you create a new model, the previous one is deleted.
When you have finished editing your model, you can solve it by clicking the Run button in this view.
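For illustration, here is a minimal Python (DOcplex) model of the kind that you could type in the model editor. The variables, constraints, and coefficients are invented for this sketch.

```python
from docplex.mp.model import Model

# A small, made-up production-planning example.
mdl = Model(name='small_production')

x = mdl.continuous_var(name='x', lb=0)  # units of product x
y = mdl.continuous_var(name='y', lb=0)  # units of product y

mdl.add_constraint(2 * x + y <= 100, ctname='machine_hours')
mdl.add_constraint(x + 3 * y <= 90, ctname='labor_hours')

mdl.maximize(20 * x + 30 * y)  # maximize total profit

solution = mdl.solve()
if solution:
    mdl.print_solution()
```

In the experiment UI, clicking Run solves this model in the environment that is associated with your experiment, so you do not need to configure an engine yourself.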
Multiple model files
You can create Python or OPL models that use multiple model files by clicking the + tab next to MODEL and selecting Add new empty or Upload Files (to add any type of file). The MODEL tab must always contain your main model. If you try to upload another file with the same name, for example model.py, you are prompted to upload it with a new name or to replace your main model. You can also replace a model by clicking the Import icon. See the Multifile example in the Model_Builder folder of the DO-samples.
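As a hedged sketch of how a main Python model might use an additional uploaded file, assuming uploaded files are available in the model's working directory (the file name parameters.csv is hypothetical; see the Multifile sample for the supported pattern):

```python
import pandas as pd

# Hypothetical example: read an extra file that was uploaded next to the
# main model with the + tab. The file name is an assumption for this sketch.
extra = pd.read_csv('parameters.csv')
max_capacity = float(extra['max_capacity'][0])
```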
Run models
To run models, you must associate a Watson Machine Learning instance with your Project and associate a deployment space with your Decision Optimization experiment. You must also have the Editor or Admin role in the deployment space.
When you run a model from the Decision Optimization experiment UI, the do_22.1 runtime is used by default.
You can view and change this CPLEX runtime and your Python environment in the experiment Overview by opening the Environment tab of the Information pane, and selecting one of the available environments for your type of model (Python, OPL, CPLEX, CPO). Python is used to run Decision Optimization models that are formulated in DOcplex in both Decision Optimization experiments and Jupyter notebooks. Modeling Assistant models also use Python because DOcplex code is generated when models are deployed.
You can also set and modify certain optimization parameters by clicking the Configure run icon next to the Run button. These parameters are then applied each time that you click Run. For more information, see Run configuration.
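The run configuration parameters belong to the experiment UI. Independently of them, engine controls can also be set directly in DOcplex code; the following is a small sketch of that alternative mechanism, where the 60-second limit is an arbitrary example and not a UI run parameter:

```python
from docplex.mp.model import Model

mdl = Model(name='example')
# ... variables, constraints, objective ...

# Limit the engine's solve time from within the model code itself.
mdl.parameters.timelimit = 60  # seconds
mdl.solve()
```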
During the run, a graphical display shows the feasible solutions that are obtained until the optimal solution is found. If you have set the intermediate solution delivery parameter in the run configuration to a certain frequency, a sample of intermediate solutions is displayed with that frequency. To see these intermediate solutions, you must click New data available. A maximum of 3 intermediate solutions are displayed at a time. You can use the tabs to see Engine statistics, KPIs, and the Log file, and you can see the solution tables of the last sampled solution in the Solution assets tab. To obtain intermediate solutions for Python DOcplex models, you must implement a specific callback in your model. See the IntermediateSolutions sample in the Model_Builder folder of the DO-samples in the Decision Optimization GitHub. Select the relevant product and version subfolder.
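As a hedged sketch of such a callback, using DOcplex's progress-listener mechanism (the IntermediateSolutions sample shows the exact pattern expected by the experiment UI, so treat this only as an outline):

```python
from docplex.mp.model import Model
from docplex.mp.progress import TextProgressListener

mdl = Model(name='example')
# ... variables, constraints, objective ...

# Attach a listener so that incumbent solutions and progress information
# are reported while the engine is running.
mdl.add_progress_listener(TextProgressListener())
mdl.solve()
```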
Run configuration
When you click the Configure run icon next to the Run button in the Build model view, a window opens showing you the currently set parameter values.
Here you can select and edit different run configuration parameters. For more information, see Run parameters.
After you set run configuration parameters, they are used with those values for all subsequent runs of that scenario.
You can remove set parameters by hovering over the parameter and clicking the Remove icon.
The Environment tab in this pane shows you the default run environment that is being used for your experiment.
When you solve a model by clicking Run, this default environment is used or, if it doesn’t exist, it is created automatically. The type of environment that is used depends on your model type (Python, OPL, CPLEX, CPO, Modeling Assistant). For more information, see Environment tab in Overview information pane. You can also configure your environments.
Explore solution view
When your run completes successfully, the solution is displayed in one or several tables, or as a file for CPLEX and CPO models, in the Explore solution view.
The Results section contains several tabs. The first tab shows the Objectives and KPIs. The Solutions tables tab shows the resulting (best) values for the decision variables. These solution tables are automatically displayed in alphabetical order. Note that these solution tables are not editable but can be filtered. See Table search and filtering. You can download both the objectives and solution tables. For CPLEX and CPO models, the solutions are not provided in tables, but in files that you can download.
You can define output tables to appear in this view in a Python DOcplex model that uses the syntax outputs['tablename']. See Input and output data.
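For example, here is a minimal sketch of publishing a solution table. The table name schedule and its contents are invented, and the outputs dictionary is provided by the experiment runtime.

```python
import pandas as pd

# Build a DataFrame from the solve results (values here are made up) and
# add it to 'outputs' so that it appears as a solution table named 'schedule'.
schedule_df = pd.DataFrame([
    {'task': 'masonry', 'start': 0, 'end': 35},
    {'task': 'carpentry', 'start': 35, 'end': 50},
])
outputs['schedule'] = schedule_df
```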
The Relaxations and Conflicts tabs show if there have been any conflicting constraints or bounds in the model. Also, if these options were chosen, these tabs show which constraints or bounds were relaxed in order to solve the model.
The Engine statistics tab shows you information about the run status (processed, stopped, or failed), graphical information about the solution, and model statistics. You can zoom in on the graph by moving the end points of the horizontal zoom bar, or by selecting an area in the graph. To restore the original graph after zooming in, you can fully expand the zoom bar or refresh the page.
The Log tab displays the log file from the CPLEX or CP Optimizer engines, which you can also download.
For multi-objective models formulated with the Modeling Assistant, the solution table also displays the sliders, weights, and scale factors that were set in the model. The combined objective is the sum of all the objective values (positive additions for minimize objectives and negative for maximize objectives) multiplied by the scale factor (1 by default) and the weight factor. The weight factor is 2 to the power of the slider weight minus 1. For example, with a slider weight of 5, the weight factor is 2^(5-1) = 2^4 = 16. The scaled weighted value is thus the objective function value multiplied by this weight factor.
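As a small worked illustration of that calculation (the objective values, senses, scale factors, and slider weights are invented for this example):

```python
# Combined objective = sum over objectives of
#   (+value for minimize, -value for maximize) * scale_factor * weight_factor,
# where weight_factor = 2 ** (slider_weight - 1).
objectives = [
    {'value': 120.0, 'sense': 'min', 'scale': 1.0, 'slider_weight': 5},
    {'value': 80.0,  'sense': 'max', 'scale': 1.0, 'slider_weight': 3},
]

combined = 0.0
for obj in objectives:
    weight_factor = 2 ** (obj['slider_weight'] - 1)  # e.g. 2 ** (5 - 1) = 16
    signed = obj['value'] if obj['sense'] == 'min' else -obj['value']
    combined += signed * obj['scale'] * weight_factor

print(combined)  # 120 * 16 - 80 * 4 = 1600.0
```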
You can export files from this view. See Exporting data.
Scenario pane
When you create a new Decision Optimization experiment, a scenario is automatically created along with the model. A scenario contains data sets, a model, and a solution. You can use scenarios to:
- Make sure a specific model works with a variety of data.
- See how different data sets impact the solution to your problem.
- See how a model formulation impacts the solution to your problem.
- Save the scenario as a model for deployment (any run configuration parameters that you might have set for that scenario are also saved in the deployment). See Deploying a Decision Optimization model by using the user interface for more details.
From the Scenario pane you can easily manage different scenarios of a Decision Optimization experiment.
To open the Scenario pane, click the Open scenario pane button. In this pane, you can:
- Create new scenarios (create a new scenario from scratch, duplicate your current scenario, or import a new scenario from a file).
- Select the scenario that you want to work in.
- See existing scenarios and their details (input data, model, solution). Each one can be expanded or collapsed by clicking the arrow next to the scenario.
- Manage existing scenarios (duplicate, rename, delete).
- Generate a Python notebook from a scenario.
- Save the scenario as a model for deployment (The data types set in the Prepare data view and any run configuration parameters that you might have set for that scenario are also saved in the deployment). See Deploying a Decision Optimization model by using the user interface for more details.
- Export the scenario as a .zip file.
If you click Generate a notebook from a scenario, the notebook is saved as an asset in your project. If you have used multiple files in the Build model view, these files are automatically referenced in the generated notebook so that you can read them from the notebook. The Python version for your generated notebook depends on the environment that you have configured for your scenario (see Configuring environments). If the environment was automatically created for your scenario, the notebook uses the default Python version 3.10.
If you click Export as zip file, a scenario.json file that describes the exported model is also included in the archive. If you make changes locally to this scenario (for example, you add a table to your model), you can edit this .json file to include these changes. When you re-import the scenario, the changes appear in your scenario.
New scenarios can be imported by choosing From file in the Create Scenario menu and then selecting the .zip file that contains your new scenario.
You can also use this method to create a new scenario from a debug .zip file that you have generated (see Custom parameters) and downloaded. The debug .zip file provides you with a scenario that contains data, model, solution, and the run configuration parameters.
You can switch scenarios while running a model and see in the scenario pane which scenarios are running or are queued.
Clicking the arrow next to a scenario in this pane also reveals summary information about the data, model, and solution.
Your scenario uses the default run environment that was created for that model type. You can view this default on the Overview Information pane Environment tab. For more information, see Hardware and software configuration and Configuring environments. To change the run environment for a particular scenario see Selecting a different run environment for a particular scenario.