Environments

When you run analytical assets in tools, an environment definition specifies the details of the runtime environment.

Environment definitions specify the hardware and software configurations for the environment runtimes:

  • Hardware resources include the amount of processing power and available RAM.
  • Software resources include the Python, R, or Scala programming languages, a set of pre-installed libraries, and optional libraries or packages that you can specify.
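
For example, from inside a Python notebook you can check what the runtime actually provides. The following is a minimal sketch that uses only the Python standard library and setuptools; the library names it checks are examples, not a statement of what any specific environment includes:

    # Inspect the runtime from inside a Python notebook.
    # Minimal sketch: uses only the standard library and setuptools.
    import os
    import platform

    import pkg_resources  # part of setuptools, commonly available in notebook runtimes

    print("Python version:", platform.python_version())
    print("CPU cores visible to this runtime:", os.cpu_count())

    # Check whether a few libraries are pre-installed (example names only).
    for lib in ("pandas", "numpy", "scikit-learn"):
        try:
            print(lib, pkg_resources.get_distribution(lib).version)
        except pkg_resources.DistributionNotFound:
            print(lib, "is not pre-installed in this environment")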

An environment definition can be:

  • A default environment definition that is included with Watson Studio.
  • A custom environment definition that you create.
  • An environment definition that is added when you provision an associated service, such as the IBM Analytics Engine service.
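
If a default environment is missing a library that you need, the persistent way to add it is a custom environment definition. As a session-only alternative, you can usually install a package from inside a running notebook. The following sketch assumes a standard Python notebook runtime with network access; it is not a Watson Studio-specific API, and the package name is only an example:

    # Install an optional package into the current notebook session.
    # The package typically lasts only for this runtime; to make it
    # available every time, add it to a custom environment definition.
    import subprocess
    import sys

    def install(package):
        """Run pip against the Python interpreter that backs this kernel."""
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])

    install("plotly")  # example package name; replace with the library you need

    import plotly
    print("Installed plotly", plotly.__version__)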

You need to specify an environment definition:

  • To run analytical assets in tools such as the notebook editor, the model builder, or the flow editor.
  • To create jobs to run Data Refinery flows or notebooks.
  • To launch IDEs in Watson Studio, such as RStudio, in which to run analytical assets like notebooks.

All default and custom environment definitions are listed in the environment definitions list on the project’s Environments page. Clicking an environment definition displays its details. An environment runtime is an instantiation of an environment definition. When a runtime becomes active, it is listed in the active environment runtimes list on the Environments page.

The following lists show the default environment definitions or compute power by analytical asset type. Each entry gives the programming language, the tool, and the environment definition type (in parentheses), followed by the available environment definitions or compute power. Note that if you have a Watson Studio Lite plan, you can’t use large environment definitions. See Offering plans.

Jupyter notebook:
  • Python in the notebook editor (Anaconda Python distribution): Default Python 3.6 XS; Default Python 3.6 S
  • Python and Decision Optimization in the notebook editor (Anaconda Python distribution): Default Python 3.6 XS + DO
  • Python in the notebook editor (Spark): Default Spark 2.4 & Python 3.6; Default Spark Python 3.6 (with Spark 2.3)
  • Python in the notebook editor (GPU): Default GPU Python 3.6
  • R in the notebook editor (Anaconda R distribution): Default R 3.6 S; Default R 3.4 XS; Default R 3.4 S
  • R in the notebook editor (Spark): Default Spark 2.4 & R 3.6; Default Spark R 3.4 (with Spark 2.3)
  • Scala in the notebook editor (Spark): Default Spark 2.4 & Scala 2.11; Default Spark Scala 2.11 (with Spark 2.3)
  • R in RStudio (Anaconda R distribution): Default RStudio L; Default RStudio M; Default RStudio XS

Data Refinery flow:
  • R in Data Refinery (Spark): Default Spark 2.4 & R 3.6; Default Spark R 3.4 (with Spark 2.3)
  • R in Data Refinery (Spark): Hadoop cluster

SPSS modeler flow:
  • SPSS algorithms in the flow editor (without Spark): IBM SPSS Modeler

Spark modeler flow:
  • Python in the flow editor (Spark): Default Spark 2.4 & Python 3.6; Default Spark Python 3.6
  • R in the flow editor (Spark): Default Spark 2.4 & R 3.6; Default Spark R 3.4
  • Scala in the flow editor (Spark): Default Spark 2.4 & Scala 2.11; Default Spark Scala 2.11

AutoAI experiment:
  • No coding (AutoAI experiment builder): 8 vCPU and 32 GB RAM

Learn more