Create an Experiment Run in a Notebook with Sample Code (Preview)

You can create runs for an experiment within a notebook by modifying the provided sample code with your experiment's details.

  1. Navigate to your notebook where you want to create a run for an experiment.
  2. Click the Experiments tab.
  3. Click Sample code.
  4. In the sample code block, replace experiment_name = "Customer Churn Prediction" with experiment_name = "<your_experiment_name>". You can also copy the following code and modify it with your experiment name:
    import mlflow 
    experiment_name = "<your_experiment_name>"  # Replace this with your own experiment name
    mlflow.set_experiment(experiment_name) 
    
    mlflow.autolog() 
    
    with mlflow.start_run(): 
        # training code goes here 
        # OPTIONAL - Log additional items after training 
        mlflow.log_metric("custom_metric", 99.9) 
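    To confirm which experiment your runs will be logged to, you can look the experiment up by name before training. The following check is a minimal sketch, not part of the generated sample code; it assumes the experiment_name variable from the block above:

    import mlflow

    experiment = mlflow.get_experiment_by_name(experiment_name)
    if experiment is not None:
        # The experiment already exists; runs are logged under this ID
        print(f"Experiment ID: {experiment.experiment_id}")
    else:
        # mlflow.set_experiment() creates the experiment if it does not exist yet
        print("Experiment not found; it will be created on first use.")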
  5. Autologging automatically records a default set of metrics, depending on the model used. To log your own metrics as well, modify the code to invoke mlflow.log_metric("<metric_name>", <metric_variable>):
    import mlflow 
    import numpy as np 
    from sklearn.tree import DecisionTreeRegressor 
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score 
    
    experiment_name = "Customer Churn Prediction" 
    mlflow.set_experiment(experiment_name) 
    
    mlflow.autolog() 
    
    with mlflow.start_run(): 
        # training code goes here 
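        # X_train, X_test, y_train, y_test are assumed to be prepared earlier in the notebook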
        model = DecisionTreeRegressor(random_state=42, max_depth=5) 
        model.fit(X_train, y_train) 
    
        preds = model.predict(X_test) 
    
        mse = mean_squared_error(y_test, preds) 
        rmse = float(np.sqrt(mse)) 
    
        mlflow.log_metric("test_rmse", rmse) 
        mlflow.log_metric("test_mae", float(mean_absolute_error(y_test, preds))) 
        mlflow.log_metric("test_r2", float(r2_score(y_test, preds))) 
        # OPTIONAL - Log additional items after training 
        mlflow.log_metric("custom_metric", 99.9) 
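    If you log several values at once, mlflow.log_metrics() accepts a dictionary, and mlflow.log_param() records a hyperparameter alongside them. The snippet below sketches that variant as a drop-in replacement for the per-metric calls above; it assumes the same rmse, preds, and y_test from inside the mlflow.start_run() block:

    # Inside the same mlflow.start_run() block as above:
    mlflow.log_param("max_depth", 5)
    mlflow.log_metrics({
        "test_rmse": rmse,
        "test_mae": float(mean_absolute_error(y_test, preds)),
        "test_r2": float(r2_score(y_test, preds)),
    })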
  6. Run the code block from your notebook. The run is now registered to the specified experiment.

    Note:

    Multiple runs for an experiment are automatically logged under different names. In parameter sweep scenarios, AI Data Platform Workbench automatically captures every run, along with the metrics you specify, under a distinct run name in the specified experiment, as shown in the sketch after this note.
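    For example, a manual parameter sweep can start one run per candidate value; each run is then logged to the experiment under its own name. The following is a minimal sketch, assuming the same training data (X_train, X_test, y_train, y_test) from earlier in the notebook; the run_name argument is optional and only makes the runs easier to tell apart:

    import mlflow
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_squared_error

    mlflow.set_experiment("Customer Churn Prediction")
    mlflow.autolog()

    for max_depth in [3, 5, 7]:
        # Each iteration creates a separate run in the experiment
        with mlflow.start_run(run_name=f"decision_tree_depth_{max_depth}"):
            model = DecisionTreeRegressor(random_state=42, max_depth=max_depth)
            model.fit(X_train, y_train)
            rmse = float(np.sqrt(mean_squared_error(y_test, model.predict(X_test))))
            mlflow.log_metric("test_rmse", rmse)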