Using JupyterHub
JupyterHub lets multiple users work together by providing each user with an individual Jupyter notebook server. When you create a cluster, JupyterHub is installed and configured on the cluster nodes.
JupyterHub is available only in Big Data Service clusters with version 3.0.7 or later.
Prerequisites
Before JupyterHub can be accessed from a browser, an administrator must:
- Make the node available to incoming connections from users. The node's private IP address needs to be mapped to a public IP address. Alternatively, the cluster can be set up to use a bastion host or Oracle FastConnect. See Connecting to Cluster Nodes with Private IP Addresses.
- Open port 8000 on the node by configuring the ingress rules in the network security list. See Defining Security Rules.
JupyterHub Default Credentials
The default admin login credentials for JupyterHub in Big Data Service 3.0.21 and earlier are:
- User name: jupyterhub
- Password: The Apache Ambari admin password. This is the cluster admin password that was specified when the cluster was created.
- Principal name for HA cluster: jupyterhub
- Keytab for HA cluster: /etc/security/keytabs/jupyterhub.keytab
The default admin login credentials for JupyterHub in Big Data Service 3.0.22 and later are:
- User name: jupyterhub
- Password: The Apache Ambari admin password. This is the cluster admin password that was specified when the cluster was created.
- Principal name for HA cluster: jupyterhub/<FQDN-OF-UN1-Hostname>
- Keytab for HA cluster: /etc/security/keytabs/jupyterhub.keytab
Example:
- Principal name for HA cluster: jupyterhub/pkbdsv2un1.rgroverprdpub1.rgroverprd.oraclevcn.com
- Keytab for HA cluster: /etc/security/keytabs/jupyterhub.keytab
The admin creates additional users and their login credentials, and provides the login credentials to those users. See Managing Users and Permissions.
Unless explicitly referenced as some other type of administrator, the use of administrator or admin throughout this section refers to the JupyterHub administrator, jupyterhub.
Accessing JupyterHub
JupyterHub runs on the second utility node of an HA (highly available) cluster, or on the first (and only) utility node of a non-HA cluster. Access it in a browser through port 8000 on that node.
Alternatively, you can open the JupyterHub link from the cluster's details page in the Console.
You can also create a load balancer to provide a secure front end for accessing services, including JupyterHub. See Connecting to Services on a Cluster Using Load Balancer.
Spawning Notebooks
The prerequisites must be met by the user trying to spawn notebooks.
- Access JupyterHub.
- Log in with your user credentials. Authorization works only if the user is present on the Linux host; JupyterHub searches for the user on the Linux host while trying to spawn the notebook server.
- You are redirected to a Server Options page where you must request a Kerberos ticket. This ticket can be requested using either the Kerberos principal and the keytab file, or the Kerberos password. The cluster admin can provide the Kerberos principal and the keytab file, or the Kerberos password.
The Kerberos ticket is needed to access the HDFS directories and the other Big Data Service services that you want to use.
Launching Kernels and Running Spark Jobs
Sample Code for Python Kernel:
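A minimal sketch of code that can run in the Python kernel; it uses only the standard library, so no cluster-specific setup is assumed and the sample data is illustrative.

```python
# Compute simple statistics in the Python kernel using only the standard library.
import statistics

scores = [90, 85, 77, 93]
print("mean:", statistics.mean(scores))
print("stdev:", round(statistics.stdev(scores), 2))
```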
Sample Code For Sparkmagic in PySpark Kernel
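In the PySpark kernel, sparkmagic runs each cell on the cluster through Apache Livy and provides a ready-made `spark` session, so no SparkSession setup is needed. A minimal sketch (this is a notebook cell, not a standalone script, and the sample data is illustrative):

```python
# Runs remotely via sparkmagic/Livy; `spark` is injected by the PySpark kernel.
df = spark.createDataFrame([("alice", 90), ("bob", 85)], ["name", "score"])
df.show()
df.groupBy().avg("score").show()
```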
Managing JupyterHub
A JupyterHub admin user can perform the following tasks to manage notebooks in JupyterHub.
To manage Oracle Linux 7 services with the systemctl command, see Working With System Services.
To log in to an Oracle Cloud Infrastructure instance, see Connecting to Your Instance.
Configuring JupyterHub
As an admin, you can configure JupyterHub.
Stopping and Starting JupyterHub
As an admin, you can stop or disable the application so it doesn't consume resources, such as memory. Restarting might also help with unexpected issues or behavior.
Managing Notebook Limits
As an admin, you can limit the number of active notebook servers in your cluster.
To edit these settings:
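As a hedged sketch, limits like these can be set in /opt/jupyterhub/jupyterhub_config.py. The option names below are standard JupyterHub settings; the values are assumptions to adjust for your cluster.

```python
# Hypothetical values in /opt/jupyterhub/jupyterhub_config.py; adjust per cluster.
c.JupyterHub.active_server_limit = 10     # max notebook servers running at once
c.JupyterHub.concurrent_spawn_limit = 5   # max servers spawning simultaneously
```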
Updating Notebook Content Manager
By default, notebooks are stored in the HDFS directory of a cluster.
You must have access to the HDFS directory hdfs:///user/<username>/. The notebooks are saved in hdfs:///user/<username>/notebooks/.
- Connect as the opc user to the utility node where JupyterHub is installed (the second utility node of an HA cluster, or the first and only utility node of a non-HA cluster).
- Use sudo to manage the JupyterHub configs stored at /opt/jupyterhub/jupyterhub_config.py:
  c.Spawner.args = ['--ServerApp.contents_manager_class="hdfscm.HDFSContentsManager"']
- Use sudo to restart JupyterHub:
  sudo systemctl restart jupyterhub.service
As an admin user, you can store the individual user notebooks in Object Storage instead of HDFS. When you change the content manager from HDFS to Object Storage, the existing notebooks are not copied over to Object Storage. The new notebooks are saved in Object Storage.
- Connect as the opc user to the utility node where JupyterHub is installed (the second utility node of an HA cluster, or the first and only utility node of a non-HA cluster).
- Use sudo to manage the JupyterHub configs stored at /opt/jupyterhub/jupyterhub_config.py. See generate access and secret key to learn how to generate the required keys.
  c.Spawner.args = ['--ServerApp.contents_manager_class="s3contents.S3ContentsManager"',
                    '--S3ContentsManager.bucket="<bucket-name>"',
                    '--S3ContentsManager.access_key_id="<accesskey>"',
                    '--S3ContentsManager.secret_access_key="<secret-key>"',
                    '--S3ContentsManager.endpoint_url="https://<object-storage-endpoint>"',
                    '--S3ContentsManager.region_name="<region>"',
                    '--ServerApp.root_dir=""']
- Use sudo to restart JupyterHub:
  sudo systemctl restart jupyterhub.service
Installing Additional Notebook Kernels
By default, the Python, PySpark, Spark, and SparkR kernels are supported.
As an admin user, to install additional kernels or libraries:
- Connect as the opc user to the utility node where JupyterHub is installed (the second utility node of an HA cluster, or the first and only utility node of a non-HA cluster).
- Use the pip3 module to install additional kernels. The sparkmagic configs (config.json) are stored under the .sparkmagic folder in the user's home directory.
Managing Users and Permissions
Use one of the two authentication methods to authenticate users to JupyterHub so that they can create notebooks and, optionally, administer JupyterHub.
By default, ODH clusters support native authentication, but authentication for JupyterHub and other Big Data services needs to be handled differently. To spawn single-user notebooks, the user logging in to JupyterHub must be present on the Linux host and must have permission to write to the root directory in HDFS. Otherwise, the spawner fails because the notebook process is triggered as the Linux user.
Using Native Authentication
Native authentication depends on the JupyterHub user database for authenticating users.
Native authentication applies to both HA and non-HA clusters. Refer to the native authenticator documentation for details.
These prerequisites must be met to authorize a user in an HA cluster using native authentication.
These prerequisites must be met to authorize a user in a non-HA cluster using native authentication.
Admin users are responsible for configuring and managing JupyterHub, and for authorizing newly signed-up users on JupyterHub.
Before adding an admin user, the prerequisites must be met for an HA cluster or non-HA cluster.
- Add admin users to the JupyterHub config file /opt/jupyterhub/jupyterhub_config.py.
- Access JupyterHub.
- Sign up an admin user. The default admin username is jupyterhub.
Before adding other users, the prerequisites must be met for an HA cluster or non-HA cluster.
An admin user can delete users.
- Access Jupyterhub.
- Open File → HubControlPanel.
- Navigate to the Authorize Users page.
- Delete the users you want to remove.
Using LDAP Authentication
To use the LDAP authenticator, you must update the JupyterHub config file with the LDAP connection details.
Refer to the LDAP authenticator documentation for details.
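As a hedged sketch, the LDAP connection details go in /opt/jupyterhub/jupyterhub_config.py. The option names below come from the ldapauthenticator package; the server address and DN template are placeholder assumptions to replace with your LDAP details.

```python
# Hypothetical LDAP settings in /opt/jupyterhub/jupyterhub_config.py.
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = 'ldaps://<ldap-server>:636'
c.LDAPAuthenticator.bind_dn_template = [
    'uid={username},ou=people,dc=example,dc=com',  # assumed directory layout
]
```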
Integrating with Object Storage
In JupyterHub, for Spark to work with Object Storage, you must define some system properties and populate them into the spark.driver.extraJavaOptions and spark.executor.extraJavaOptions properties in the Spark configs.
Before you can successfully integrate JupyterHub with Object Storage, you must:
- Create a bucket in Object Storage to store your data.
- Create an Object Storage API key.
The properties you must define in the Spark configs are:
- TenantID
- Userid
- Fingerprint
- PemFilePath
- PassPhrase
- Region
To retrieve the values for these properties:
- Open the navigation menu and click Analytics & AI. Under Data Lake, click Big Data Service.
- Under Compartment, select the compartment that hosts your cluster.
- In the list of clusters, click the cluster you are working with that has Jupyterhub.
- Under Resources click Object Storage API keys.
- From the actions menu of the API key you want to view, click View configuration file.
The configuration file contains all the system property details except the passphrase. The passphrase is specified while creating the Object Storage API key, and you must remember and use that same passphrase.
- Access Jupyterhub.
- Open a new notebook.
- Copy and paste the following commands to connect to Spark.
import findspark
findspark.init()
import pyspark
- Copy and paste the following commands to create a Spark session with the specified configurations. Replace the variables with the system properties values you retrieved previously.
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .enableHiveSupport() \
    .config("spark.driver.extraJavaOptions",
            "-DBDS_OSS_CLIENT_REGION=<Region> -DBDS_OSS_CLIENT_AUTH_TENANTID=<TenantId> "
            "-DBDS_OSS_CLIENT_AUTH_USERID=<UserId> -DBDS_OSS_CLIENT_AUTH_FINGERPRINT=<FingerPrint> "
            "-DBDS_OSS_CLIENT_AUTH_PEMFILEPATH=<PemFile> -DBDS_OSS_CLIENT_AUTH_PASSPHRASE=<PassPhrase>") \
    .config("spark.executor.extraJavaOptions",
            "-DBDS_OSS_CLIENT_REGION=<Region> -DBDS_OSS_CLIENT_AUTH_TENANTID=<TenantId> "
            "-DBDS_OSS_CLIENT_AUTH_USERID=<UserId> -DBDS_OSS_CLIENT_AUTH_FINGERPRINT=<FingerPrint> "
            "-DBDS_OSS_CLIENT_AUTH_PEMFILEPATH=<PemFile> -DBDS_OSS_CLIENT_AUTH_PASSPHRASE=<PassPhrase>") \
    .appName("<appname>") \
    .getOrCreate()
- Copy and paste the following commands to create the Object Storage directories and file, and store data in Parquet Format.
demoUri = "oci://<BucketName>@<Tenancy>/<DirectoriesAndSubDirectories>/"
parquetTableUri = demoUri + "<fileName>"
spark.range(10).repartition(1).write.mode("overwrite").format("parquet").save(parquetTableUri)
- Copy and paste the following command to read data from Object Storage.
spark.read.format("parquet").load(parquetTableUri).show()
- Run the notebook with all these commands.
The output of the code is displayed. You can navigate to the Object Storage bucket from the Console and find the file created in the bucket.
Integrating with Trino
Prerequisites
- Trino must be installed and configured in the Big Data Service cluster.
- Install the following Python module on the JupyterHub node (UN1 for an HA cluster, UN0 for a non-HA cluster). Note: Skip this step if the Trino Python module is already present on the node.
  python3.6 -m pip install trino==0.309.0
  Offline installation: Download the required Python module on any machine with internet access. Example:
  python3 -m pip download trino==0.309.0 -d /tmp/package
  Copy the folder contents to the offline node and install the package:
  python3 -m pip install ./package/*
  Note: The Trino Python client is compatible with the latest 1.3.x and 1.4.x SQLAlchemy versions and with Python >= 3.6. BDS cluster nodes come with Python 3.6 and SQLAlchemy 1.4.46 by default.
Integrating with Big Data Service HA cluster
If the Trino-Ranger-Plugin is enabled, be sure to add the provided keytab user to the respective Trino Ranger policies. See Integrating Trino with Ranger.
By default, Trino uses the full Kerberos principal name as the user. Therefore, when adding or updating Trino Ranger policies, you must use the full Kerberos principal name as the username.
For the following code sample, use jupyterhub@BDSCLOUDSERVICE.ORACLE.COM as the user in the Trino Ranger policies.
Provide Ranger permissions for JupyterHub to the following policies:
- all - catalog, schema, table, column
- all - function
Integrating with Big Data Service non-HA cluster
Integrating Trino Python Client from Outside Big Data Service Environment
Integrate the Trino Python client from outside the Big Data Service environment.
Sample Script to Query on Table
from sqlalchemy import create_engine
from sqlalchemy.schema import Table, MetaData
from sqlalchemy.sql.expression import select, text
from trino.auth import KerberosAuthentication
from subprocess import Popen, PIPE
import pandas as pd
# Provide a user-specific keytab_path and principal. To run queries with a
# different keytab, update keytab_path and user_principal below; otherwise,
# use the same keytab_path and principal that were used while starting the
# notebook session. Refer to the sample values below.
keytab_path='/tmp/trino.service.keytab'
user_principal='trino/daily-cluster-odh1-17-mn0.bmbdcsad1.bmbdcs.oraclevcn.com@BDSCLOUD.ORACLE.COM'
# Cert path is required for SSL.
cert_path= '/tmp/oraclerootCA.crt'
# trino url = 'trino://<trino-coordinator>:<port>'
trino_url='trino://daily-cluster-odh1-17-mn0.bmbdcsad1.bmbdcs.oraclevcn.com:7778'
# This step is optional; it is required only to run queries with a different keytab.
kinit_args = [ '/usr/bin/kinit', '-kt', keytab_path, user_principal]
subp = Popen(kinit_args, stdin=PIPE, stdout=PIPE, stderr=PIPE)
subp.wait()
engine = create_engine(
trino_url,
connect_args={
"auth": KerberosAuthentication(service_name="trino", principal=user_principal, ca_bundle=cert_path),
"http_scheme": "https",
"verify": True
}
)
query = "select custkey, name, phone, acctbal from tpch.sf1.customer limit 10"
df = pd.read_sql(query, engine)
print(df)