4.1.2.3 Trigger Scenario Execution
Executes the scenario for the given combination of Run Dates and Threshold Set Ids.
Execute Scenario
%python-pgx
if run_flag:
    asc.asc_cleanup4rerun()
asc.execute_scenario()
Execution Status
- Current Executions: Shows only the executions currently running in the notebook Python session.
- All Executions: Shows the current executions along with executions from previous runs/sessions for the same definition id and version.
Detailed Execution Status
Note:
For a scenario with multiple Job Ids, every Job Id must have the status COMPLETED for the execution of a given run date to be considered COMPLETED; otherwise, it is marked as FAILED. The error log displays the error that occurred in the paragraph of the Utility Scenario notebook.
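The roll-up of Job Id statuses into a single run-date status can be sketched as follows (a minimal illustration only; the `rollup_run_date_status` helper and its input shape are hypothetical, not part of the utility's API):

```python
# Hypothetical sketch: a run date is COMPLETED only when every one of its
# Job Ids reports COMPLETED; any other status marks the run date FAILED.
def rollup_run_date_status(job_statuses):
    """job_statuses: list of per-Job-Id status strings for one run date."""
    if all(status == "COMPLETED" for status in job_statuses):
        return "COMPLETED"
    return "FAILED"

# One job still RUNNING (or FAILED) fails the whole run date.
print(rollup_run_date_status(["COMPLETED", "COMPLETED"]))  # COMPLETED
print(rollup_run_date_status(["COMPLETED", "RUNNING"]))    # FAILED
```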
Upon successful completion of each run, the post-processing step can be performed as described below. This moves the generated events into a table in the schema, from which they can be retrieved for further analysis.
Re-run/Update Flag and All Execution status
The user may want to re-execute the failed run dates, provided post-processing has not yet been executed for the current definition id and version. The run date statuses in All Execution Status are updated with the latest status of each run date, whether run_flag is True or False.
The asc_runid_lookup table gets an entry for the current definition id and version only after post-processing is executed.
Users must set the Re-run/Update flag to True to update existing run dates, or to add new run dates to the existing definition and version in the asc_runid_lookup table.
Upon completion of each run, only the COMPLETED run dates are considered for the post-processing step, which is described below. This moves the generated events into a table in the schema, from which they can be retrieved for further analysis.
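The selection of run dates for post-processing can be illustrated with a small sketch (the status mapping and the `runs_eligible_for_post_processing` helper are hypothetical, not part of the utility's API):

```python
# Hypothetical sketch: only run dates whose latest status is COMPLETED
# are passed on to the post-processing step; FAILED run dates are skipped.
def runs_eligible_for_post_processing(run_date_status):
    """run_date_status: dict mapping run date -> latest execution status."""
    return sorted(d for d, s in run_date_status.items() if s == "COMPLETED")

statuses = {"20231031": "COMPLETED", "20231130": "FAILED", "20231231": "COMPLETED"}
print(runs_eligible_for_post_processing(statuses))
# ['20231031', '20231231']
```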
Show Available Scenario Bindings
%python-pgx
asc.show_scenario_bindings()
Set Expression for Tunable Parameters
Note:
For many scenarios, the available bindings include both a Base and a Func version of a parameter. In such cases, determine the Curr_Type the scenario has been configured to use: if it is set to F, use the Func version of the parameter; otherwise, use the Base version.
%python-pgx
tunable_parameters = "TOTAL_DEPOSIT_AMOUNT | TOTAL_WITHDRAWAL_AMOUNT"
setattr(asc,'tunable_parameters',tunable_parameters)
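The tunable_parameters string above is a pipe-delimited list of parameter names; assuming that format, it can be split into individual names as sketched below (`parse_tunable_parameters` is a hypothetical helper for illustration, not part of the utility's API):

```python
# Hypothetical sketch: split a pipe-delimited tunable-parameters expression
# into a clean list of parameter names, dropping surrounding whitespace.
def parse_tunable_parameters(expression):
    return [name.strip() for name in expression.split("|") if name.strip()]

tunable_parameters = "TOTAL_DEPOSIT_AMOUNT | TOTAL_WITHDRAWAL_AMOUNT"
print(parse_tunable_parameters(tunable_parameters))
# ['TOTAL_DEPOSIT_AMOUNT', 'TOTAL_WITHDRAWAL_AMOUNT']
```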
Note:
Using the setattr method, the expression for the tunable parameters is stored in the class object for PreProd analysis.
Post Execution Activity
%python-pgx
asc.scenario_post_processing()
This API performs the following post scenario execution activities:
- Creating a unique asc_run_id in the ASC_RUNID_LOOKUP table.
- Loading ATL alerts for the given run_id into the ASC_INVESTIGATED_ENTITIES table.
- Loading scenario data for the given Run Dates into the ASC_EVENT_MASTER table.
Review Results
You can review the results of the scenario execution. A summary of the event volumes is generated as shown below.
Event Volume
- RUN_DATE: Aggregate alerts by different run dates.
- EVENT_TAG: Aggregate alerts by event tag (ATL or BTL).
- RUN_DATE_AND_EVENT_TAG: Aggregate alerts by RUN DATE and EVENT TAG.
- JURISDICTION_AND_RUN_DATE: Aggregate alerts by JURISDICTION and RUN_DATE.
- RUN_DATE_AND_THRESHOLD_SET_ID: Aggregate alerts by RUN DATE and THRESHOLD SET ID.
- EVENT_TAG_AND_THRESHOLD_SET_ID: Aggregate alerts by EVENT_TAG and THRESHOLD SET ID.
- THRESHOLD_SET_ID: Aggregate alerts by THRESHOLD SET ID.
- ALL: Aggregate alerts by all inputs together (by SEGMENT_ID, RUN_DATE, EVENT_TAG).
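The aggregation options above can be illustrated with plain Python (the alert records and the `group_alerts` helper are hypothetical; the utility generates these summaries itself):

```python
from collections import Counter

# Hypothetical sketch: count alerts grouped by a chosen combination of fields,
# mirroring options such as RUN_DATE or RUN_DATE_AND_EVENT_TAG.
def group_alerts(alerts, keys):
    return Counter(tuple(alert[k] for k in keys) for alert in alerts)

alerts = [
    {"RUN_DATE": "20231031", "EVENT_TAG": "ATL", "THRESHOLD_SET_ID": 101},
    {"RUN_DATE": "20231031", "EVENT_TAG": "BTL", "THRESHOLD_SET_ID": 101},
    {"RUN_DATE": "20231130", "EVENT_TAG": "ATL", "THRESHOLD_SET_ID": 102},
]

print(group_alerts(alerts, ["RUN_DATE"]))               # counts per run date
print(group_alerts(alerts, ["RUN_DATE", "EVENT_TAG"]))  # counts per date/tag pair
```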
Save Object
Saves the current ASC object in the DB tracking table for later use in the ATL, BTL, Impact, and PreProd Analysis notebooks.
%python-pgx
asc.save_object()