Learn how compute usage is measured in capacity unit hours (CUH) consumed by active environment runtimes in watsonx.ai Studio. Your watsonx.ai Studio plan governs how you are billed monthly for the resources that you consume.
Feature | Lite | Professional | Standard (legacy) | Enterprise (legacy) |
---|---|---|---|---|
Processing usage | 10 CUH per month | Unlimited CUH billed for usage per month | 10 CUH per month + pay for more | 5000 CUH per month + pay for more |
Capacity units per hour for notebooks
Capacity type | Language | Capacity units per hour |
---|---|---|
1 vCPU and 4 GB RAM | Python, R | 0.5 |
2 vCPU and 8 GB RAM | Python, R | 1 |
4 vCPU and 16 GB RAM | Python, R | 2 |
8 vCPU and 32 GB RAM | Python, R | 4 |
16 vCPU and 64 GB RAM | Python, R | 8 |
Driver: 1 vCPU and 4 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python, Spark with R | 1 (CUH per additional executor is 0.5) |
Driver: 1 vCPU and 4 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python, Spark with R | 1.5 (CUH per additional executor is 1) |
Driver: 2 vCPU and 8 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python, Spark with R | 1.5 (CUH per additional executor is 0.5) |
Driver: 2 vCPU and 8 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python, Spark with R | 2 (CUH per additional executor is 1) |
Driver: 3 vCPU and 12 GB RAM; 1 Executor: 3 vCPU and 12 GB RAM | Spark with Python, Spark with R | 2 (CUH per additional executor is 1) |
The rate of capacity units per hour consumed is determined for:

- Default Python or R environments by the hardware size and the number of users in a project using one or more runtimes.

  For example: The `IBM Runtime 24.1 on Python 3.10 XS` environment with 2 vCPUs consumes 1 CUH if it runs for one hour. If a project has 7 users working on notebooks 8 hours a day, 5 days a week, all using the `IBM Runtime 24.1 on Python 3.10 XS` environment, and everyone shuts down their runtimes when they leave in the evening, runtime consumption is 7 x 8 x 5 = 280 CUH per week.

  The CUH calculation becomes more complex when different environments are used to run notebooks in the same project and when users have multiple active runtimes, each consuming its own CUH. Additionally, notebooks might be scheduled to run during off-hours, and long-running jobs likewise consume CUH.

- Default Spark environments by the hardware configuration size of the driver, and by the number of executors and their size. A minimal calculation sketch follows this list.
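To make the arithmetic concrete, here is a minimal Python sketch of how CUH consumption can be estimated from the rates in the tables above. The helper functions and the scenario values are illustrative assumptions, not part of any watsonx.ai SDK or API.

```python
# Illustrative helpers for estimating CUH consumption. These functions are not
# part of any watsonx.ai SDK; they only restate the rates shown in the tables above.

def notebook_cuh(rate_per_hour: float, hours: float, runtimes: int = 1) -> float:
    """CUH for notebook runtimes: hourly rate x hours x number of active runtimes."""
    return rate_per_hour * hours * runtimes


def spark_cuh(base_rate: float, extra_executor_rate: float,
              executors: int, hours: float) -> float:
    """CUH for a Spark runtime: the base rate covers the driver plus one executor;
    each additional executor adds its own hourly rate."""
    additional = max(executors - 1, 0)
    return (base_rate + additional * extra_executor_rate) * hours


# Example from the text: 7 users, 8 hours a day, 5 days a week, on the XS runtime (1 CUH per hour).
print(notebook_cuh(rate_per_hour=1, hours=8 * 5, runtimes=7))   # 280 CUH per week

# Hypothetical Spark scenario: driver 1 vCPU/4 GB with 3 executors of 1 vCPU/4 GB, running 2 hours.
# Base rate 1 CUH per hour; each additional executor adds 0.5 CUH per hour.
print(spark_cuh(base_rate=1, extra_executor_rate=0.5, executors=3, hours=2))  # 4.0 CUH
```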
Capacity units per hour for notebooks with Decision Optimization
The rate of capacity units per hour consumed is determined by the hardware size plus a fixed add-on of 5 CUH per hour for Decision Optimization.
Capacity type | Language | Capacity units per hour |
---|---|---|
1 vCPU and 4 GB RAM | Python + Decision Optimization | 0.5 + 5 = 5.5 |
2 vCPU and 8 GB RAM | Python + Decision Optimization | 1 + 5 = 6 |
4 vCPU and 16 GB RAM | Python + Decision Optimization | 2 + 5 = 7 |
8 vCPU and 32 GB RAM | Python + Decision Optimization | 4 + 5 = 9 |
16 vCPU and 64 GB RAM | Python + Decision Optimization | 8 + 5 = 13 |
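As a worked check of this base-plus-add-on arithmetic (the same pattern applies to the Watson Natural Language Processing table that follows), here is a minimal Python sketch. The dictionary keys and function name are illustrative, not part of any watsonx.ai API.

```python
# Illustrative only: these rates restate the add-on tables in this section;
# they are not read from any watsonx.ai API.
BASE_RATES = {"1 vCPU / 4 GB": 0.5, "2 vCPU / 8 GB": 1,
              "4 vCPU / 16 GB": 2, "8 vCPU / 32 GB": 4, "16 vCPU / 64 GB": 8}
ADD_ON_RATE = 5  # hourly add-on for Decision Optimization (Watson NLP uses the same value)


def cuh_with_add_on(capacity: str, hours: float) -> float:
    """Total CUH = (base hardware rate + add-on rate) x runtime hours."""
    return (BASE_RATES[capacity] + ADD_ON_RATE) * hours


print(cuh_with_add_on("2 vCPU / 8 GB", 3))  # (1 + 5) x 3 = 18 CUH
```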
Capacity units per hour for notebooks with Watson Natural Language Processing
The rate of capacity units per hour consumed is determined by the hardware size plus a fixed add-on of 5 CUH per hour for Watson Natural Language Processing.
Capacity type | Language | Capacity units per hour |
---|---|---|
1 vCPU and 4 GB RAM | Python + Watson Natural Language Processing | 0.5 + 5 = 5.5 |
2 vCPU and 8 GB RAM | Python + Watson Natural Language Processing | 1 + 5 = 6 |
4 vCPU and 16 GB RAM | Python + Watson Natural Language Processing | 2 + 5 = 7 |
8 vCPU and 32 GB RAM | Python + Watson Natural Language Processing | 4 + 5 = 9 |
16 vCPU and 64 GB RAM | Python + Watson Natural Language Processing | 8 + 5 = 13 |
Capacity units per hour for Synthetic Data Generator
Capacity type | Capacity units per hour |
---|---|
2 vCPU and 8 GB RAM | 7 |
Capacity units per hour for SPSS Modeler flows
Name | Capacity type | Capacity units per hour |
---|---|---|
Default SPSS Modeler S | 2 vCPU and 8 GB RAM | 1 |
Default SPSS Modeler M | 4 vCPU and 16 GB RAM | 2 |
Default SPSS Modeler L | 6 vCPU and 24 GB RAM | 3 |
Capacity units per hour for Data Refinery and Data Refinery flows
Name | Capacity type | Capacity units per hour |
---|---|---|
Default Data Refinery XS runtime | 3 vCPU and 12 GB RAM | 1.5 |
Default Spark 3.4 & R 4.2 | 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
Capacity units per hour for RStudio
Name | Capacity type | Capacity units per hour |
---|---|---|
Default RStudio XS | 2 vCPU and 8 GB RAM | 1 |
Default RStudio M | 8 vCPU and 32 GB RAM | 4 |
Default RStudio L | 16 vCPU and 64 GB RAM | 8 |
Capacity units per hour for GPU environments
Capacity type | GPUs | Language | Capacity units per hour |
---|---|---|---|
1 x NVIDIA Tesla V100 | 1 | Python with GPU | 68 |
2 x NVIDIA Tesla V100 | 2 | Python with GPU | 136 |
Resource usage for Prompt Lab
Prompt Lab does not consume compute resources. Prompt Lab usage is measured by the number of processed tokens. See Billing details for generative AI assets.
Learn more
- For information on monitoring your account's resource usage, see Monitoring account resource usage.
- For details on computing resource allocation and consumption, see watsonx.ai Studio environments compute usage.
Parent topic: watsonx.ai Studio plans