Using Spark in RStudio
Although you cannot start the RStudio IDE in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically.
RStudio uses the sparklyr package to connect to Spark from R. The sparklyr package includes a dplyr interface to Spark data frames, as well as an R interface to Spark's distributed machine learning pipelines.
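As a minimal sketch of what that dplyr interface looks like in practice, the snippet below copies R's built-in mtcars data set to a local Spark connection and runs dplyr verbs on it; the verbs are translated to Spark SQL and executed in Spark. This is an illustrative example, not one of the bundled sample scripts.

```
# Sketch: sparklyr's dplyr interface, assuming a local Spark installation
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Copy the built-in mtcars data frame into Spark
mtcars_tbl <- copy_to(sc, mtcars, "mtcars", overwrite = TRUE)

# dplyr verbs on a Spark data frame run inside Spark;
# collect() brings the result back into R
mtcars_tbl %>%
  filter(cyl == 8) %>%
  summarise(avg_hp = mean(hp, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)
```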
There are two methods of connecting to Spark from RStudio:
- By connecting to a Spark kernel that runs locally in the RStudio container in IBM Watson Studio
- By connecting to a remote Spark kernel that runs outside of IBM Watson Studio in an Analytics Engine powered by Apache Spark service instance
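The two methods can be sketched as follows with sparklyr. The remote endpoint URL below is a placeholder, not a real service address; consult the bundled sample scripts for the exact connection details for your Analytics Engine service instance.

```
# Sketch of the two connection styles
library(sparklyr)

# 1) Local Spark kernel inside the RStudio container
sc_local <- spark_connect(master = "local")

# 2) Remote Spark kernel, for example over Livy
#    (URL is hypothetical; authentication details omitted)
sc_remote <- spark_connect(
  master = "https://spark.example.com/livy",
  method = "livy"
)
```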
RStudio includes sample code snippets that show you how to connect to a Spark kernel in your applications for both methods.
To use Spark in RStudio after you have launched the IDE:
Go to the ibm_sparkaas_demos directory under your home directory and open it. The directory contains the following R scripts:
- A readme with details on the included R sample scripts
- spark_kernel_basic_local.R includes sample code that shows how to connect to a local Spark kernel
- spark_kernel_basic_remote.R includes sample code that shows how to connect to a remote Spark kernel
- Sample application files, such as sparkaas_mtcars.R, that show how to use Spark in a small sample application
Use the sample code snippets in your R scripts or applications to help you get started using Spark.