7 Build Your Applications
After uploading the source code files to Git repositories, you can use the Builds page to create and configure build jobs and pipelines, run builds, and generate artifacts.
Configure and Run Project Jobs and Builds
Oracle Visual Builder Studio (VB Studio) includes continuous integration services to build project source files. You configure the builds from the Builds page.
The Builds page, also called the Jobs Overview page, displays information about all the project's build jobs and provides links to configure and manage them.
What Are Jobs and Builds?
A job is a configuration that defines your application's builds. A build is the result of a job’s run.
A job defines where to find the source code files, how and when to run builds, and the software and environment required to run builds. When a build runs, it packages application archives that can be deployed to a web server. A job can have hundreds of builds, each generating its own artifacts, logs, and reports.
Here are some terms that this documentation uses to describe the build features and components:
Term | Description |
---|---|
Build system | Software that automates the process of compiling, linking and packaging the source code into an executable form. |
Build executor template | A build executor template defines the operating system and software packages installed on a VM build executor. A build executor template must be created before VM build executors can be added to it. See What Are VM Build Executors and Build Executor Templates?. |
VM build executor | A VM build executor is an OCI or OCI Classic VM compute instance dedicated to running builds of jobs that organization members define in VB Studio projects. A VM build executor is always associated with a build executor template. Each build uses one VM build executor. See What Are VM Build Executors and Build Executor Templates?. |
Build artifact | A file generated by a build. It could be a packaged archive file, such as a .zip or .ear file, that you can deploy to a build server. |
Trigger | A condition to run a build. |
Understand Software Packages
There are multiple versions of some software, such as Node.js and Java, listed in the Software Catalog. This software is referred to as software packages.
A package's version number has two parts: the major version and the minor version. If a software's version is 1.2.3, then 1 is its major version and 2.3 is its minor version. In a software's tile, the major version number is displayed in the title of the package. On the Configure Software page, the number shown in Version is the installed version, which includes both major and minor versions.
Here's an example. In this image, Node.js 0.12, 8, 10, and 12 are shown in the software catalog. In the Node.js 12 tile, 12 is the major version and 3.1 is its minor version. The installed version of the software is 12.3.1.
When a new minor version of a software package is available in the Software Catalog, all build executor templates using that software package are updated automatically. For example, assume that Node.js 10.13 is available in the Software Catalog for the Node.js 10 package. When Node.js 10.15 is made available in the Software Catalog, all build executor templates using the Node.js 10 package update automatically to use Node.js 10.15. If there’s an incompatibility between the upgraded software and other installed software of the build executor template, an error is reported with suggestions about the cause of the error.
When a new major version of a software package is available in the catalog, build executor templates using the older versions of the software package aren't updated automatically. The new major version of the software is added to the catalog as a separate package. For example, when Node.js 12 is available in the Software Catalog, all build executor templates using Node.js 0.12, Node.js 8, or Node.js 10 aren’t updated automatically. To use the new version, you must manually update the build executor templates to use the new package.
When a major version of a software package is removed from the catalog, all build executor templates using that version are updated automatically to use the next higher version. For example, when Node.js 8 is phased out, build executor templates using Node.js 8 are automatically updated to use Node.js 10.
Create and Manage Jobs
From the Builds page, you can create jobs that run builds and generate artifacts that you can deploy:
Action | How To |
---|---|
Create a blank job | |
Copy an existing job | There may be times that you want to copy parameters and a job configuration from one job to another. You can do that when you create a job. You cannot copy the configuration of an existing job to another existing job. After you create the new job, you can modify the copied parameters and configuration. |
Create a job that accepts build parameters and will be associated with a merge request | |
Create a job using YAML | In VB Studio, you can use a YAML file to create a job and define its configuration. The file is stored in a project's Git repository. See Configure Jobs and Pipelines with YAML. |
Configure a job | The job configuration page opens immediately after you create a job. You can also open it from the Jobs tab: click Configure. |
Run a build of a job | In the Jobs tab, click Build Now. |
Delete a job | In the Jobs tab, click Delete. |
Configure a Job
You can create, manage, and configure jobs from the Jobs tab on the Builds page.
To open a job's configuration page, go to the Jobs tab on the Builds page and click the job's name. In the Job Details page, click Configure.
Configure a Job's Privacy Setting
The project owner can mark a job as private to restrict who can see or edit a job's configuration, or run its build.
You can see whether a job is private from several places in the VB Studio user interface. A private job is indicated by a Lock icon:
- In the jobs list on the Project Administration tile's Builds page's Job Protection tab, to the right of each protected job's name.
- In the Private column on the Builds page's Jobs tab.
- In the jobs shown in the Builds page's Pipelines tab.
An unauthorized user can't run a private build job manually, through a pipeline, or via an SCM/periodic trigger.
Access a Project's Git Repositories
You can configure a job to access a project’s Git repositories and their source code files:
- Open the job's configuration page.
- Click Configure, if necessary.
- Click the Git tab.
- From the Add Git list, select Git.
- In Repository, select the Git repository to track. When you created the job, if you selected the Use for Merge Request check box, the field is automatically populated with the ${GIT_REPO_URL} value. Don't change it.
- In Branch, select the branch name in the repository to track. By default, main is set. When you created the job, if you selected the Use for Merge Request check box, Branch is automatically populated with the ${GIT_REPO_BRANCH} value. Don't change it unless you don't want to link the job with a merge request.
- Click Save.
If you specify multiple Git repositories, make sure that you set Local Checkout Directory for all Git repositories.
Trigger a Build Automatically on an SCM Commit
You can configure a job to monitor a Git repository and trigger a build automatically each time a commit is pushed.
- A trigger that's set up to build automatically on an SCM commit only works with repositories that are defined and stored in the same project. In this case, commits to Git repositories in your project are sufficient triggers for automatically initiating a build.
- An automatic trigger for a build job isn't recommended for an external repository. In this case, a polling job should be set up. Only a git push operation will result in a build.
- An automatic trigger for a build job isn't recommended for an external repository that has been cloned as an internal repository either. In this case, a polling job should be set up. Only a git push operation will result in a build.
To set up a polling-based build trigger, see Trigger a Build Automatically According to an SCM Polling Schedule.
Here's how to configure a job that monitors a Git repository in your project and triggers a build automatically when a Git commit is pushed to the repository being tracked:
- Open the job’s configuration page.
- Click Configure, if necessary.
- Click the Git tab and either use the dropdown to select the repository you want to monitor or type the name of the repository in the entry field.
- For the Git repository you want to monitor, select the Automatically perform build on SCM commit check box.
- To include or exclude files when tracking changes in the repository, see Include or Exclude Files to Trigger a Build.
- To exclude users whose commits to the repository don’t trigger builds, in Excluded User, enter the list of user names.
- Click Save.
Trigger a Build Automatically According to an SCM Polling Schedule
SCM polling enables you to configure a job to periodically check the job's Git repositories for any commits pushed since the job's last build. If updates are found, a build is triggered. You specify the schedule in Coordinated Universal Time (UTC), the primary time standard by which the world regulates clocks and time. If you're not a Cron expert, use the novice mode and set the schedule by specifying values. If you're a Cron expert, use the Expert mode.
You can specify the schedule using Cron expressions:
- Open the job's configuration page.
- In the Git tab, add the Git repository. To include or exclude files when tracking changes in the repository according to a Cron expression, see Include or Exclude Files to Trigger a Build.
- Click Settings.
- Click the Triggers tab.
- Click Add Trigger and select SCM Polling Trigger.
- To use the expert mode, select the Expert mode check box and enter the schedule in the text box. The default pattern is 0/30 * * * *, which runs a build every 30 minutes. After you edit the expression, it's validated immediately when the cursor moves out of the text box. Note that the other fields of the section aren't available when the check box is selected.
- To use the novice mode, deselect the Expert mode check box and specify the schedule information in Minute, Hour, Day of the Month, Month, and Day of the Week. Click Toggle Recurrence to add or remove 0/ or 1/ at the beginning of the value in the Cron expression. The page displays the generated Cron expression next to the Expert mode check box.
  Tip: To check the job's Git repositories every minute, deselect all check boxes. Remember that this may consume large amounts of system resources.
- If necessary, in Comment, enter a comment.
- To view and verify the build schedule of the next ten builds, from the timezone drop-down list, select your time zone and then click View Schedule.
- Click Save.
To see the SCM poll log of the job after the build runs, in the job's details page or the build's details page, click SCM Poll Log.
Generate Cron Expressions
You can use Cron expressions to define periodic build patterns.
For more information about Cron, see http://en.wikipedia.org/wiki/Cron.
You can specify the Cron schedule information in the following format:
MINUTE HOUR DOM MONTH DOW
where:
- MINUTE is the minute within the hour (0-59)
- HOUR is the hour of the day (0-23)
- DOM is the day of the month (1-31)
- MONTH is the month (1-12)
- DOW is the day of the week (1-7)
To specify multiple values, you can use the following operators:
- * to specify all valid values
- - to specify a range, such as 1-5
- / or */X to specify skips of X's value through the range, such as 0/15 in the MINUTE field for 0,15,30,45
- A,B,...,Z to specify multiple values, such as 0,30 or 1,3,5
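For example, here are a few schedules expressed in this format (illustrative values only):

    30 2 * * *      # every day at 02:30 UTC
    0/15 * * * *    # every 15 minutes (at 0, 15, 30, and 45 minutes past each hour)
    0 6 * * 1-5     # at 06:00 UTC on days 1 through 5 of the week
    0 0 1 * *       # at 00:00 UTC on the first day of every month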
Include or Exclude Files to Trigger a Build
When you've configured a job to monitor a Git repository, you can use fileset patterns to include or exclude files when tracking changes in the repository. Then, each time a change is committed, only changes to files that match these patterns determine whether a build is triggered or not.
Here's how to specify the fileset patterns that determine whether a build is triggered:
- Open the job’s configuration page.
- Click Configure, if necessary.
- Click the Git tab.
- Expand Advanced Git Settings and select either Include or Exclude.
  - Click Include to specify a list of files and directories in the repository that you want to track for changes. By default, all files are included for tracking (**/*), meaning changes to any file or directory in the repository will trigger a build.
    To change the default configuration, select Include and specify the fileset to be included in Directories/Files. You can use regular expressions (regex) or glob patterns to specify the fileset. Each entry must be separated by a new line.
    You can extend this configuration to specify Exceptions to the included fileset. If changes occur only in the fileset specified as an exception, a build won't run.
    Here are some glob pattern examples:

Desired Outcome | In Directories/Files, enter: | In Exceptions, enter: | Result |
---|---|---|---|
Trigger a build following changes to .html, .jpeg, or .gif files in the myapp/src/main/web/ directory | myapp/src/main/web/*.html myapp/src/main/web/*.jpeg myapp/src/main/web/*.gif | Leave blank | A build runs when a .html, .jpeg, or .gif file is changed in the myapp/src/main/web/ directory. |
Trigger a build following changes to .java files, but not .html files | *.java | *.html | A build runs when any .java file is changed, except when all changed files are .html files. |
Trigger a build following changes to .java files, but not test.java | *.java | test.java | A build runs when any .java file is changed, except when test.java is the only changed file. |

  - Click Exclude to specify a list of files and directories in the repository that you don't want to track for changes. If all changes are only in the specified files, a build won't be triggered. By default, no files are excluded, meaning all files and directories are tracked and therefore changes to any file or directory in the repository will trigger a build.
    To change the default configuration, select Exclude and specify the fileset to be excluded in Directories/Files. You can use regular expressions (regex) or glob patterns to specify an excluded fileset. Each entry must be separated by a new line.
    Optionally, specify files or directories within the excluded fileset that you want to include as Exceptions. If changes occur in the fileset specified as an exception, a build will be triggered.
    Here are some glob pattern examples:

Desired Outcome | In Directories/Files, enter: | In Exceptions, enter: | Result |
---|---|---|---|
Don't trigger a build when only .java files are changed | *.java | Leave blank | A build won't run when all changed files are .java files, but changes in any other file (say, test.txt and test.html) trigger a build. |
Don't trigger a build when .java files in the /myapp/mobile/ directory are changed, with the exception of test.java | /myapp/mobile/*.java | test.java | A build won't run when all changes are in .java files other than test.java in the /myapp/mobile/ directory. But a build runs when test.java in the /myapp/mobile/ directory is the only changed file. |
Don't trigger a build for changes to any file, except .sql files | **/* | *.sql | A build runs only when .sql files are changed. |
Don't trigger a build when only .html, .jpeg, or .gif files in the myapp/src/main/web/ directory are changed | myapp/src/main/web/*.html myapp/src/main/web/*.jpeg myapp/src/main/web/*.gif | Leave blank | A build won't run when only .html, .jpeg, or .gif files in the myapp/src/main/web directory are changed. |
Don't trigger a build when .gitignore files are changed | *.gitignore | Leave blank | A build won't run when the only changed files are .gitignore files. |
- Click Save.
Use External Git Repositories
If you use an external Git repository to manage source code files, you can configure a job to access its files when a build runs:
- If the external Git repository is a public repository, mirror it in the project or use its direct URL in the job configuration.
- If the external Git repository is a private repository, you must mirror it in the project. See Mirror an External Git Repository.
To configure a job to use an external Git repository:
Access Files in a Git Repository's Private Branch
To access a Git repository's private branch, configure the job to use SSH:
Publish Git Artifacts to a Git Repository
Git artifacts, such as tags, branches, and merge results can be published to a Git repository as a post-build action:
Advanced Git Options
When you configure the Git repositories of a job, you can also configure some advanced Git options, such as changing the remote name of the repository, setting the checkout directory in the workspace, and choosing whether to clean the workspace before a build runs.
You can perform these configuration actions from the Git tab of the job configuration page:
Action | How To |
---|---|
Change the remote name of a repository | For the Git repository, expand Advanced Repository Options, and specify the new name in Name. The default remote name is |
Specify a reference specification of a repository | A reference repository helps to speed up the builds of the job by creating a cache in the workspace and hence reducing the data transfer. When a build runs, instead of cloning the Git repository from the remote, the build executor clones it from the reference repository. To create a reference specification of a Git repository, expand Advanced Repository Options, and specify the name in Ref Spec. Leave the field empty to create a default reference specification. |
Specify a local checkout directory | The local checkout directory is a directory in the workspace where the Git repository is checked out when a build runs. To specify the directory of a Git repository, expand Advanced Repository Options, and specify the path in Local Checkout Directory. If left empty, the Git repository is checked out in the root directory of the workspace. |
Include or exclude a list of files and directories to determine whether to trigger a build or not | When you've enabled a build to be triggered either on each SCM commit or according to a polling schedule, expand Advanced Git Settings and select Include or Exclude. For more examples, see Include or Exclude Files to Trigger a Build. |
Check out the remote repository's branch and merge it into a local branch | Expand Advanced Git Settings and, in Merge another branch, specify the branch name to merge to. If specified, the build executor checks out the revision to build as. If necessary, in Checkout revision, specify the branch to check out and build as. |
Configure Git user.name and user.email variables | Expand Advanced Git Settings and, in Config user.name and Config user.email, specify the user name and the email address. |
Merge to a branch before a build runs | Expand Advanced Git Settings and select the Merge from another repository check box. In Repository, enter or select the name of the repository to be merged. In Branch, enter or select the name of the branch to be merged. If no branch is specified, the default branch of the repository is used. The build runs only if the merge is successful. |
Prune obsolete local branches before running a build | Expand Advanced Git Settings and select the Prune remote branches before build check box. |
Skip the internal tag | When a build runs, the build executor checks out the Git repository to the local repository of the workspace and applies a tag to it. To skip this process, expand Advanced Git Settings and select the Skip internal tag check box. |
Remove untracked files before running a build | Expand Advanced Git Settings and select the Clean after checkout check box. |
Retrieve sub-modules recursively | Expand Advanced Git Settings and select the Recursively update submodules check box. |
Display the commit's author in the log | By default, the Git change log shows the commit's |
Delete all files of the workspace before a build runs | Expand Advanced Git Settings and select the Wipe out workspace before build check box. |
View the SCM Changes Log
The SCM changes log displays files that were added, edited, or removed from the job’s Git repositories before the build was triggered.
You can view the SCM changes log from the job’s details page and a build’s details page. The Recent SCM Changes page that you open from the job’s details page shows SCM commits from the last 20 builds, in reverse order. The SCM Changes page that you open from a build’s details page shows SCM commits that happened after the previous build.
The log shows the build ID, commit messages, commit IDs, and affected files.
Trigger a Build Automatically on a Schedule
You can configure a job to run builds on a schedule, specified in Coordinated Universal Time (UTC), the primary time standard by which the world regulates clocks and time.
Note: Regardless of how you set up the schedule, your builds could be delayed if no VM build executors of the job's build executor template are free to run builds at the scheduled time, or if any of the VM build executors are in the Stopped/Pending state.
- Open the job's configuration page.
- Click Settings.
- Select the Triggers tab.
- Click Add Trigger and select Periodic Build Trigger.
- You can specify the schedule using Cron expressions. If you're a Cron expert, use the Expert mode (see step 6). If you're not a Cron expert, use the novice mode and set the schedule by specifying values (see step 7).
- To use the expert mode, select the Expert mode check box, and enter the schedule in the text box. The default pattern is 0/30 * * * *, which runs a build every 30 minutes. After you edit the expression, it's validated as soon as you move the cursor outside the text box. Note that the other fields of the section aren't available when the check box is selected.
- To use the novice mode, deselect the Expert mode check box and specify the schedule information in Minute, Hour, Day of the Month, Month, and Day of the Week. Click Toggle Recurrence to add or remove 0/ or 1/ at the beginning of the value in the Cron expression. The page displays the generated Cron expression next to the Expert mode check box.
- If necessary, in Comment, enter a comment.
- To view and verify the build schedule of the next ten builds, from the timezone drop-down list, select your time zone and then click View Schedule.
- Click Save.
Use Build Parameters
You can use build parameters to pass additional information to a build when it runs, information that isn't available at job configuration time.
You can configure a job to use a parameter and its value as an environment variable or through variable substitution in other parts of the job configuration. When a build runs, a Configure Parameters dialog box opens so you can enter or change the default values of the parameters:
- Open the job's detail page.
- Click Configure.
- Click the Parameters tab.
- From the Add Parameter drop-down list, select the parameter type. You can add these types of build parameters:

Use this parameter type ... | To: |
---|---|
String | Accept a string value from the user when a build runs. The parameter field appears as a text box in the Configure Parameters dialog. |
Password/Private Key | Accept a password or private key value from the user when a build runs. The parameter field appears as a password box in the Configure Parameters dialog. It's important to note that the password/private key setting isn't a toggle. If you change the selection from password to private key (or private key to password), you'll need to re-enter the password/private key value. |
Boolean | Accept true or false as input from the user when a build runs. The parameter field appears as a check box in the Configure Parameters dialog. |
Choice | Accept a value from a list of values when a build runs. The parameter field appears as a drop-down list in the Configure Parameters dialog. The first value is the default selected value. |
Merge Request | Accept string values for the Git repository URL, the Git repository branch name, and the merge request ID as input. The parameter fields appear as text boxes in the Configure Parameters dialog. Use this parameter when you want to link an existing job with a merge request. |

- Enter values, such as name, default value, password/private key, and description.
  Note: Parameter names must contain letters, numbers, or underscores only. They can't begin with a number and they aren't case-sensitive (the names "job", "JOB", and "Job" are all treated the same).
  You can't use hyphens in build parameter names. When the build system encounters a script or a command with a hyphenated build parameter name in a UNIX shell build step, it removes the portion of the name preceding the hyphen. If you try to use a hyphen in a build parameter name in a job, you won't be able to save the job configuration that includes it.
  In addition, you shouldn't use an underscore by itself or any of the system or other environmental variable names listed in Reserved Words that Shouldn't Be Used in Build Parameter Names as build parameter names. There could be unintended consequences if you do.
- Click Save.
For example, if you want a job to change the default values for the Gradle version, the OCI username, and the OCI user password when a build runs, in a Build step, create the Choice, String, and Password/Private Key build parameters to accept the values. Notice that the value for the Password/Private Key parameter isn't displayed in the input field.
Use the $BUILD_PARAMETER format when you're using build parameters. (The ${BUILD_PARAMETER} format can be used too.) For example, this screenshot shows the Gradle version, OCI username, and OCI password parameters used in the build step fields of a job. Notice that the password/private key parameter's variable name isn't displayed:
When a build runs, the Configure Parameters dialog opens where you can enter or change the default values of parameters. All parameter values, except the Password/Private Key parameter's value, are displayed as strings in the dialog box, and subsequently in the build log. This screenshot shows the dialog for a job configured to use a password parameter:
This screenshot shows the dialog for a similar job configured to use a private key parameter instead of a password parameter:
Notice the difference between the OCIPkey private key parameter's default value (asterisks instead of black dots) and the OCIPwd password parameter's default value shown in the previous screenshot.
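As a rough sketch, such parameters can be referenced in a Unix Shell build step as environment variables. The OCIPwd name below echoes the screenshots; GradleVersion, OCIUsername, and the deploy.sh script are hypothetical stand-ins:

    # Both $PARAM and ${PARAM} forms work in shell build steps
    echo "Deploying with Gradle version $GradleVersion as user ${OCIUsername}"

    # OCIPwd is a Password/Private Key parameter; its value isn't shown in the build log
    ./deploy.sh --user "$OCIUsername" --password "$OCIPwd"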
If you selected the Use for Merge Request check box while creating the job, GIT_REPO_URL, GIT_REPO_BRANCH, and MERGE_REQ_ID Merge Request parameters are automatically added to accept the Git repository URL, the Git repository branch name, and the merge request ID as input from the merge request, respectively. The GIT_REPO_URL and GIT_REPO_BRANCH variables are automatically set in the Repository and Branch fields of the Git tab.
- Upstream job parameters are passed to downstream jobs. For example, in a pipeline that flows from Job A to Job B to Job C, if parameter P1 is defined in Job A and parameter P2 is defined in Job B, then parameter P1 is passed to Job B and parameters P1 and P2 are passed to Job C.
- An upstream job with the same named parameter as a downstream job overwrites the default value of that parameter in the downstream job. For example, if parameters P1 and P2 are defined in Job A and parameters P2 and P3 are defined in Job B, then the value of parameter P2 from Job A overwrites the default value of parameter P2 in Job B. If there were a Job C downstream from Job B, then the initial default value of P2 (from Job A) plus the values of P1 and P3 would be passed to Job C.
- When a build of the pipeline runs, the Configure Parameters dialog box displays all parameters of the jobs in the pipeline. A duplicate parameter is displayed once and its value is used by all jobs that use the parameter. The default value of a duplicate parameter comes from the first job in the pipeline where it is defined. For example, in a pipeline that flows from Job A to Job B to Job C, if parameter P1 is defined in Job A, parameters P2 and P3 are defined in Job B, and parameters P1 and P4 are defined in Job C, then when the pipeline runs, parameters P1, P2, P3, and P4 are displayed once in the Configure Parameters dialog box, even though parameter P1 is defined in two jobs. The default value of P1 comes from Job A and is passed to the subsequent jobs of the pipeline.
In the pipeline, if the Auto start when pipeline jobs are built externally check box is selected, the Configure Parameters dialog box isn't displayed when a build of a pipeline's job runs. The jobs downstream of the job that triggered the build use the default values of their parameters. If a parameter is duplicated, a job uses the default value from the first job where the parameter was defined.
For example, in a pipeline that flows from Job A to Job B to Job C, if parameter P1 is defined in Job A, parameters P2 and P3 are defined in Job B, and parameters P1 and P4 are defined in Job C, then when a build of Job A runs, it passes the default value of P1 to Job B and Job C, and overwrites the default of P1 in Job C. If a build of Job B runs instead, the builds use the default values of P2, P3, P1 (defined in Job C), and P4.
To learn how to use build parameters in a Shell build step, see the GNU documentation on Shell Parameter Expansion at https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html.
Reserved Words that Shouldn't Be Used in Build Parameter Names
A system environment variable shouldn't be used as a parameter name. If you use one of the following system environment variable names, the build might run incorrectly or even fail unexpectedly:
- home
- hostname
- lang
- path
- pwd
- shell
- term
- user
- username
In addition, to avoid interfering with the plugin or the process that introduced them, you should avoid using the following environment variable names (listed alphabetically), which may be used elsewhere:
- _ (underscore)
- ant_home
- build_dir, build_id, build_number
- cvs_rsh
- dcspassbuildinfofeaturecurrentorg, dcspassbuildinfofeaturecurrentproject
- g_broken_filenames, git_repo_branch, git_repo_url, gradle_home
- histcontrol, histsize, http_proxy, http_proxy_host, http_proxy_port, https_proxy, https_proxy_host, https_proxy_port
- isdcspassbuildinfofeatureenabled
- java_home, java_vendor, javacloud_home, javacloud_home_11_1_1_7_1, javacloud_home_11g, javacloud_home_soa, javacloud_home_soa_12_2_1, job_name
- lessopen, logname
- m2_home, merge_req_id, middleware_home, middleware_home_11_1_1_7_1, middleware_home_11g, middleware_home_soa, middleware_home_soa_12_2_1
- no_proxy, no_proxy_alt, node_path
- oracle_home, oracle_home_11_1_1_7_1, oracle_home_11g, oracle_home_soa, oracle_home_soa_12_2_1
- qtdir, qtinc, qtlib
- shlvl, ssh_askpass
- tool_path
- wls_home, wls_home_11_1_1_7_1, wls_home_11g, wls_home_soa, wls_home_soa_12_2_1, workspace
Use a Named Password/Private Key
A named password/private key is a variable that users can use across a project's build job configurations, in any password/private key field in the job configuration, including external Git repositories as well as in SQLcl, PSMcli, and Docker configurations.
When the value of the password or private key changes, you can edit and reset it, and the new value is applied to all jobs and configurations where the variable is used. However, if you change the selection from password to private key (or the other way around), you must re-enter a new value for the password or private key.
Note that the named password/private key is not an environment variable. To use a named password/private key as an environment variable, create a Password/Private Key build parameter and set it to use the named password/private key.
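For example, here's a minimal sketch of the idea: a Password/Private Key build parameter (the DB_PASSWORD name and the deploy.sh script are hypothetical) is set to use a named password, and a Unix Shell build step then reads it as an environment variable:

    # DB_PASSWORD is a hypothetical Password/Private Key build parameter bound to a named password;
    # when the named password is reset, every job that uses it picks up the new value automatically.
    ./deploy.sh --user admin --password "$DB_PASSWORD"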
Create and Manage Named Passwords/Private Keys
If you're a project owner, you can create, edit, and delete named passwords/private keys:
Action | How To |
---|---|
Create a named password/private key | After you create the named password or named private key, share its name with your project users. |
Edit a named password/private key | |
Delete a named password/private key | After you delete the named password or private key, let your project users know that it's no longer available. |
Configure a Job to Use a Named Password/Private Key
Here's how you can configure a job that uses a named password/private key:
- Open the job’s configuration page.
- Click the Parameters tab.
- Click Add Parameter and select Password/Private Key Parameter.
- In the Name field, enter a name for the parameter, then do one of the following:
- Click Save.
Access an Oracle Cloud Service Using SSH
You can configure a job to use SSH to access any Oracle Cloud service instance that has SSH enabled, such as Oracle Cloud Infrastructure Compute Classic VMs.
- Create an SSH tunnel to access a process running on a remote system, including an on-premises system, via the SSH port. The SSH tunnel is created at the start of the build job and is destroyed automatically when the job finishes.
- Set up the default ~/.ssh directory with the provided keys in the build's workspace for use with the command-line tools. The modifications revert after the job finishes.
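For context, the tunnel created by the first option is roughly equivalent to a standard SSH local port forward. This generic sketch uses placeholder values (ports, user, and host); in VB Studio the tunnel is set up from the job configuration rather than by running ssh yourself:

    # Forward local port 1521 to port 1521 on the remote service instance (placeholder values)
    ssh -L 1521:localhost:1521 opc@192.0.2.10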
To connect to the Oracle Cloud service instance, you need the IP address of the server, the credentials of a user who can connect to the service instance, and the local and remote port numbers:
Access the Oracle Maven Repository
The Oracle Maven Repository contains artifacts, such as ADF libraries, provided by Oracle. You may require these artifacts to compile, test, package, perform integration testing, or deploy your applications. For more information about the Oracle Maven Repository, see https://maven.oracle.com/doc.html.
To build your applications and access the Oracle Maven Repository, you configure the job and provide your credentials to access the repository:
- Open https://www.oracle.com/webapps/maven/register/license.html in your web browser, sign in with your Oracle Account credentials, and accept the license agreement.
- Configure the POM file and add the Oracle Maven Repository details:
  - Add a <repository> element that refers to https://maven.oracle.com:

        <repositories>
          <repository>
            <name>OracleMaven</name>
            <id>maven.oracle.com</id>
            <url>https://maven.oracle.com</url>
          </repository>
        </repositories>

    Depending on your application, you may also want to add the <pluginRepository> element and make it refer to https://maven.oracle.com:

        <pluginRepositories>
          <pluginRepository>
            <name>OracleMaven</name>
            <id>maven.oracle.com</id>
            <url>https://maven.oracle.com</url>
          </pluginRepository>
        </pluginRepositories>

  - Commit the POM file to the project's Git repository.
- If you're the project owner, set up Oracle Maven Repository connections for your project's team members.
- Create and configure a job to access the Oracle Maven Repository. See Configure a Job to Connect to the Oracle Maven Repository.
Create and Manage Oracle Maven Repository Connections
If your project users access the Oracle Maven Repository frequently, you can create a pre-defined connection for them. Project users can then configure a job and use the connection to access the artifacts of the Oracle Maven Repository while running builds.
You must be a project owner to add and manage Oracle Maven Repository connections.
To create, edit, and delete a connection, you’ll need the Oracle Technology Network (OTN) Single Sign-On (SSO) credentials of a user who has accepted the Oracle Maven Repository license agreement:
Action | How To |
---|---|
Add an Oracle Maven Repository connection | |
Edit a connection and change the connection's user credentials or provide another server ID | |
Delete the connection | |
Configure a Job to Connect to the Oracle Maven Repository
Here's how you can set up a job using a predefined connection to connect to the Oracle Maven Repository:
- Open the job’s configuration page.
- Click Configure.
- Click the Before Build tab.
- Click Add Before Build Action and select Oracle Maven Repository Connection.
- From Use Existing Connection, select a pre-defined connection. Your project owner has created a connection so that you don't have to worry about setting it up.
If there's no pre-defined connection available or you want to set up your own connection, click the toggle button. In OTN Username and OTN Password, enter the credentials of a user who has accepted the Oracle Maven Repository license agreement.
- In Server Id, if required, enter the ID to use for the <server> element of the Maven settings.xml file, or use the default maven.oracle.com ID.
- If you're using a custom settings.xml file, in Custom settings.xml, enter the file's path.
- Click Save.
Generate a Dependency Vulnerability Analysis Report
You can configure a job to generate a Dependency Vulnerability Analysis (DVA) report for a Maven, Node.js/Javascript, or Gradle application. This report can help you analyze any publicly known vulnerabilities in the application's dependencies.
When a build runs, VB Studio scans the job's POM file (Maven), package.json file (Node.js/Javascript), or build.gradle file (Gradle) and checks the direct and transitive dependencies against the National Vulnerability Database (NVD). See https://nvd.nist.gov/ to find out more about the NVD.
Data may also be retrieved from these sources:
- NPM: Data may be retrieved from the NPM Public Advisories, https://www.npmjs.com/advisories.
- RetireJS: Data may be retrieved from the RetireJS community, https://retire.js/github.io/retire.js/.
- Sonatype OSS Index: Data may be retrieved from the Sonatype OSS Index, https://sonatype.ossindex.org.
For any vulnerabilities found, you can configure the job to mark the build as failed or file an issue. If email notifications have been enabled or if a Slack webhook has been configured, you can be notified about these actions through email or Slack.
To configure a job to scan for security vulnerabilities:
After the build runs, the vulnerability analysis report shows:
- Issue ID, if the Create issue for every affected file check box was selected. Click the issue link to open it. You can also open the Project Home page and check the recent activities feed for the issue's creation notification. You should see a message that an issue was created, such as System created Defect 2: Vulnerabilities in -MavenJavaApp. If an issue was previously created for the vulnerability, a comment will be added to the issue and a message like System commented Defect 2: Vulnerabilities in - MavenJavaApp will be added to the activities feed.
- Merge request ID, if the Resolve button was clicked to resolve the vulnerabilities. Click the merge request link to open it.
- Number of vulnerabilities
- Name of each dependency where a vulnerability is found
- Each dependency's type (direct or transitive). A transitive dependency displays a Transitive label next to the name. A direct dependency displays no label.
- Number of alerts and alert categories of vulnerabilities (High, Medium, or Low)
- Expand each dependency to view its vulnerabilities
To mute a vulnerability's alerts, expand the vulnerability in the Report section, and click Mute in Alerts. In the Mute Vulnerability dialog box, review the details, and click Mute. The muted vulnerability won't be reported during the next run and it will not cause the build to fail. It will simply be included in the report as a muted vulnerability that should be used only for reference or to be unmuted and dealt with at some future time.
Muted vulnerabilities will only show up in a report for the latest build, not in reports for any previous builds.
To fix a reported vulnerability, use Resolve and the dropdown menu in the analysis tool to change the dependency's version to one that doesn't have the vulnerability.
Resolve Reported Vulnerabilities Automatically
After the Dependency Vulnerability Analysis (DVA) report for the Maven, Node.js, Javascript, or Gradle application has been generated, review the report to identify the vulnerabilities in the flagged files, and click the Resolve button to resolve them.
The Resolve button simplifies and automates the process for resolving vulnerabilities found in the direct as well as transitive dependencies of the application's build file. The Resolve button isn't available in the DVA reports of older builds of the job; it is only available in the latest build of the job. The Resolve button is also disabled if a package.json file in a Node.js or Javascript application has vulnerabilities in transitive dependencies only. Transitive dependencies in Node.js and Javascript applications must be resolved manually, by editing the direct dependencies in the package.json file and rerunning the analyzer.
Click the Resolve button to resolve any direct and transitive dependencies that were found:
- In the Report section of the vulnerability analysis report, expand the affected build file (POM is shown):
- Click Resolve. If a merge request exists, you can cancel the dialog and use it, or continue to create another merge request.
- In the Resolve Vulnerability dialog box, review the reported vulnerabilities.
- If an issue was created when the report was generated, its ID is displayed. If no issue was created, select the Create issue to track this resolution check box to create it. In Linked Builds, add an existing build to link it to the merge request. In Reviewers, add team members to review the merge request:
- For each vulnerability, in Available Versions, select a version of the direct dependency or dependency with transitive dependencies that doesn't have the reported vulnerability. If you don't want to resolve the dependency or no versions are available, select Do Not Resolve.
- Click Create New Merge Request. When you click the button, VB Studio does the following:
  - Creates a merge request with details about the vulnerabilities found.
  - Creates a branch with the job's Git repository branch as the base branch, and then sets it as the review branch of the merge request.
  - Sets the job's Git repository branch as the target branch of the merge request.
  - Updates the review branch's application build file to use the specified versions of the dependencies.
  For example, if the job that generated the vulnerability report uses the JavaMavenApp Git repository and its release1.1 branch, then a new branch is created in JavaMavenApp using release1.1 as the base branch and is used as the review branch of the merge request. The release1.1 branch is used as the target branch. If a merge request with the same review and target branches was created in an older build of the job, VB Studio uses the same merge request to merge the application build file updates.
- Click the merge request link to open it in another tab or window of the browser, and click OK.
- In the merge request, review the details of the vulnerabilities in the Conversation tab and the application build file changes (POM is shown) in the Changed Files tab:
- If you've invited other reviewers, wait for their feedback.
- If you've linked a build job to the merge request, in the Linked Builds tab, run a build and verify its stability.
- When you're ready to merge the application build file updates, click Merge.
- In the Merge dialog box, to delete the review branch, select the Delete branch check box. To resolve linked issues, select the Resolve linked issues check box and the check boxes of issues you want to resolve.
- Click Create a merge request.
- Run a build of the job that reported dependency vulnerabilities and verify that the application build file's update has fixed the vulnerability. If a vulnerability is still found, repeat the preceding steps to create another merge request after selecting a different dependency version.
Import and Export Oracle Integration Artifacts and Packages Between Environments
When you want to share your code between different Oracle Integration instances, you can use VB Studio's CI/CD capabilities to configure build jobs that export and import Integration artifacts (known as Integrations) and packages (collections of Integrations) from one Oracle Integration instance to another. This capability is useful to promote your code from lower environments to higher ones, typically from a development to a test and finally to a production environment.
Integrations are connections to applications with which you want to share data and are created from the Oracle Integration user interface. Each integration includes dependent artifacts such as lookup tables, JavaScript libraries, and connection types. It does not, however, include connection endpoints or credentials. Integrations can be grouped into collections in a package so, when you import or export the package to or from Oracle Integration, all integrations in that package are imported or exported.
To share your code between different Oracle Integration instances, you'll need to export and then import individual or packaged integrations from your source environment to the target environment—a task that VB Studio can automate for you. You can set up export and import build jobs to move an Integration archive (IAR file) or package (PAR file) from one Oracle Integration instance to another. It's possible to do this with standalone build jobs or as part of a build pipeline.
Step | Description | See this topic |
---|---|---|
Configure a build job to export Integration artifacts from an Oracle Integration instance. | Creates and executes a job to export an Integration archive (IAR file) from a source Oracle Integration instance and store it. | Configure a Job to Export an Integration |
Configure a build job to import Integration artifacts to an Oracle Integration instance. | Creates and executes a job to import a previously exported Integration archive (IAR file) to another Oracle Integration instance. | Configure a Job to Import an Integration |
Optional: Configure a build job to delete an Integration artifact that's no longer needed or one that causes a conflict. | Creates and executes a job to delete an Integration archive (IAR file) that exists on a particular Oracle Integration instance. | Configure a Job to Delete an Integration |
Configure a build job to export a package of Integration artifacts from an Oracle Integration instance. | Creates and executes a job to export an Oracle Integration package (PAR file) of integrations from a source Oracle Integration instance and store it in a VB Studio repository. | Configure a Job to Export an Oracle Integration Package |
Configure a build job to import a package of Integration artifacts to an Oracle Integration instance. | Creates and executes a job to import a previously exported Integration package (PAR file) to another Oracle Integration instance. | Configure a Job to Import an Oracle Integration Package |
Optional: Configure a build job to delete a package of Integration artifacts that's no longer needed. | Creates and executes a job to delete a package (PAR file) of integrations on a particular Oracle Integration instance. By default, these integrations will be automatically deactivated before they are deleted. | Configure a Job to Delete an Oracle Integration Package |
Configure a Job to Export an Integration
You can create and execute a job that exports an Integration archive (IAR file) from a source Oracle Integration instance and stores it. You'll need to copy the artifact (IAR) from your export job into your import build job. Optionally, you can set up the job to add recordings to the exported artifact.
Configure a Job to Import an Integration
You can create and execute a job that references a previously exported Integration archive (IAR file) and imports it to another Oracle Integration instance. Optionally, you can set up the job to activate the Integration after importing it.
Configure a Job to Delete an Integration
You might want to configure a build job to delete Integration artifacts from an Oracle Integration instance, especially when an existing artifact could cause a conflict.
Configure a Job to Export an Oracle Integration Package
You can create and execute a job that exports an Oracle Integration package (PAR file) of integrations from a source Oracle Integration instance and stores it in a VB Studio repository. Optionally, you can set up the job so the export operation adds artifacts with asserter recordings to the artifact.
Configure a Job to Import an Oracle Integration Package
You can create and execute a job that references a previously exported Integration package (PAR file) and imports it to another Oracle Integration instance. Optionally, you can set up the job to include integrations with asserter recordings, if any were written to the exported package.
Configure a Job to Delete an Oracle Integration Package
You might want to configure a build job to delete an Oracle Integration package from an Oracle Integration instance. This action will delete the package and all integrations included in that package.
Run Unix Shell Commands
You can configure a job to run a Unix shell script or execute commands when a build runs:
- Open the job's configuration page.
- Click Configure.
- Click the Steps tab.
- From Add Step, select Common Build Tools, then select Unix Shell.
- In Script, enter the shell script or commands. The script runs with the workspace as the current directory. If no header line, such as #!/bin/sh, is specified in the shell script, the system shell is used. You can also use the header line to write a script in another language, such as Perl (#!/bin/perl), or to control the options that the shell uses. You can also use Kubernetes, PSMcli, Docker, Terraform, Packer, and OCIcli commands in the shell script. Make sure that you have the required software in the job's build executor template before you run a build.
- To show the values of the variables and hide the input-output redirection in the build log, select the (-x) Expand variables in commands, don't show I/O redirection option. To show the command as-is in the build log, select the (-v) Show commands exactly as written option.
- Click Save.
Tip:
- By default, when a build runs, it invokes the shell with the -ex option. It prints all commands before they run. The build will fail if any command exits with a non-zero exit code. To change this behavior, add the #!/bin/... line to the shell script.
- If you have a long script, create a script file, add it to the Git repository, and then run it using a command, such as bash -ex /myfolder/myscript.sh.
- To run Python 3, create an isolated environment using venv. See https://docs.python.org/3/library/venv.html. For example, to create a virtual environment, add these commands as a Unix Shell build step:

      pip3 list
      cd $WORKSPACE
      python3 -m venv mytest
      cd mytest/bin
      ./pip3 list
      ./pip3 install --upgrade pip requests setuptools selenium
      ./pip3 list
      ./python3 -c 'import requests; r=requests.get('\''https://www.google.com'\''); print(r.status_code)'
      ./pip3 uninstall -y requests
      ./pip3 list
- To provide Python3 capabilities to build jobs, use the Python3 bundles that are included with the build executor template by default. If you need specific capabilities that aren't available by default, you'll need to add the Python3 version that has those capabilities to the build executor template.
- If both Python 2 and Python 3 are available in the job's build executor template, use these commands to call Python:

Command | Version |
---|---|
python | The python command refers to the pre-installed OS-specific Python version: Python 2 (OL6, OL7) or Python 3 (version 3.6.8 on OL8). |
python2 | Python 2 |
python3 | The python3 command refers to the Python 3 version installed with the software bundle. |
pip | pip of Python 3 |
pip3 | pip of Python 3 |
- To clone an external Git repository using a shell command, use the internal URL of the external Git repository. To copy the URL, open the Git page and, from the Repositories drop-down list, select the external Git repository. From the Clone menu, click Copy to clipboard for the Clone with HTTPS from internal address URL. If you're using an Oracle Linux 6 VM build executor in your job, remove the username from the URL before using it in a shell command. For example, if the copied URL is https://alex.admin@developer.us2.oraclecloud.com/mydomain-usoracle22222/s/developer1111-usoracle22222_myproject/scm/myextrepo.git, then remove alex.admin@ from the URL and use https://developer.us.oraclecloud.com/mydomain-usoracle22222/s/mydomain-usoracle22222_myproject/scm/mydomain-usoracle22222_myproject.git in your shell command.
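To make the last tip concrete, a cloning step in a Unix Shell script might look like the following minimal sketch (the URL is the internal address from the example above, with the username removed):

    # Clone the external repository into the build workspace using its internal HTTPS URL
    cd $WORKSPACE
    git clone https://developer.us2.oraclecloud.com/mydomain-usoracle22222/s/developer1111-usoracle22222_myproject/scm/myextrepo.git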
Use Docker-In-Docker with Shell Scripts
In VB Studio, Docker-in-Docker functionality is implemented using a methodology known as "sibling" containers, which means that a build creates images and containers in the deployment VM's Docker environment. Since multiple Docker executors share the same deployment VM, the images and containers will be shared among builds.
Note:
If your organization's builds use Docker executors and if those builds create Docker images and Docker containers, they'll be managed by the Docker environment in the deployment VM. This allows builds to interact with images and containers from other builds. If your project contains sensitive data and requires its build to run isolated in a VM, you should set up the build using VM executors instead.
Using a simple command, such as docker rm $(docker container ls -q), in a shell script in a build could have the unintended consequence of killing containers that were created by other builds. To prevent this from happening, follow these recommendations to create and remove Docker images and containers:
- When you create a Docker image, use a unique name, by appending $TASKID to the image name. This distinguishes the image created by the build from a shared image.
- When you create a Docker container, use a unique name, by adding $TASKID to the container label. This distinguishes the container created by the build from a shared container.
- Containers that are created must be scoped to the build. They must be stopped and removed when the build completes.
- Images that are created in a build may be used across many builds, to avoid recreating the image during every build. However, take care to not consume all of the disk space in the deployment VM.
- Don't issue a Docker command, such as docker rmi $(docker image ls -q), that deletes all Docker images. Instead, use a command, like docker rmi <my_image>, that only deletes specific images that were created by the build.
, that only deletes specific images that were created by the build. - Don't issue Docker commands like
docker stop $(docker ps -q)
anddocker rm $(docker ps -q)
that stop and delete all Docker containers.Instead, use commands like
docker stop <my_container>
anddocker rm <my_container>
that stop and remove specific containers that were created by the build.Here's an example that uses container
some_name_$TASKID
, with "_$TASKID" appended to the name. By using $TASKID with the job name, you can be sure that the container name is specific to your job and won't affect any other job:DOCKER_IMAGE=some_image # Pull and run the container docker pull ${DOCKER_IMAGE} CONTAINER_ID=$(docker run --network=host --name some_name_$TASKID -it -d ${DOCKER_IMAGE}) # Use your container # Stop and remove the container docker stop ${CONTAINER_ID} docker rm ${CONTAINER_ID}
Build Maven Applications
Using Apache Maven, you can automate your build process and download dependencies, as defined in the POM file:
- Upload the Maven POM files to the project Git repository.
- Open the job’s configuration page.
- Click Configure
.
- In the Git tab, add the Git repository where you uploaded the build files.
- Click the Steps tab.
- From Add Step, select Maven.
- In Goals, enter Maven goals, or phases, along with their options. By default, the clean and install goals are added. For more information about Maven goals, see the Maven Lifecycle Reference documentation at http://maven.apache.org.
. - In POM file, enter the Maven POM file name and path, relative to the workspace root. The default value is pom.xml at the Git repository root.
- If necessary, specify the advanced Maven options:

Action | How To |
---|---|
Use a private repository for builds | Select the Use Private Repository check box. You may want to use it to make sure that other Maven build artifacts don't interfere with the artifacts of this job's builds. When a build runs, it creates a Maven repository .maven/repo directory in the build executor workspace. Remember that selecting this option consumes more storage space in the workspace. |
Use a private temporary directory for builds | Select the Use Private Temp Directory check box. You may want to use it to create a temporary directory for artifacts or temporary files. When a build runs, it creates a .maven/tmp directory in the workspace. The temporary files may consume a large amount of storage, so remember to clean up the directory regularly. |
Work offline and don't access remote Maven repositories | Select the Offline check box. |
Activate Maven profiles | In Profiles, enter a list of profiles, separated by commas. For more information about Maven profiles, see the Maven documentation at http://maven.apache.org. |
Set custom properties | In Properties, enter custom system properties in the key=value format, specifying each property on its own line. When a build runs, the properties are passed to the build executor in the standard way (example: -Dkey1=value1 -Dkey2=value2). |
Set the Maven verbosity level | From Verbosity, select the level. You may want to use it to set the verbosity of the Maven log output to the build log. |
Set the checksum mode | From Checksum, select the mode. You may want to use it to set the checksum validation strictness when the build downloads artifacts from the remote Maven repositories. |
Set handling of the SNAPSHOT artifacts | From Snapshot, select the mode. |
Include other Maven projects in the reactor | In Projects, enter the comma- or space-separated list of Maven project jobs to include in the reactor. The reactor is a mechanism in Maven that handles multi-module projects. A project job can be specified by [groupId]:artifactId or by its relative path. |
Resume a Maven project from the reactor | In Resume From, enter the Maven job project name from which you would like to resume the reactor. The Maven job project can be specified by [groupId]:artifactId or by its relative path. |
Set the failure handling mode | From Fail Mode, select the mode. You may want to use it to set how the Maven build proceeds in case of a failure. |
Set the Make-like reactor mode | From Make Mode, select the mode. You may want to use it to enable Make-like build behavior. |
Configure the reactor threading model | In Threading, enter the value for experimental support for parallel builds. For example, a value of 3 indicates three threads for the build. |
Pass parameters to the Java VM | In JVM Options, enter the parameters. The build passes the parameters as MAVEN_OPTS. |
. - Click Save.
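As a rough illustration, the Goals, Profiles, Properties, Offline, and POM file settings correspond to a Maven command line along these lines; the profile name and property below are placeholders, not values from this documentation:

```
# Goals: clean install; Profiles: integration; Properties: key1=value1; POM file: pom.xml
mvn clean install -P integration -Dkey1=value1 -o -f pom.xml
```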
Use the WebLogic Maven Plugin
The WebLogic server includes a Maven plugin that you can use to perform various deployment operations against the server, such as deploy, redeploy, and update. The plugin is available in the VB Studio build executor. For more information about how to use the WebLogic Maven plugin, see Fusion Middleware Deploying Applications to Oracle WebLogic Server in Oracle Fusion Middleware Online Documentation Library.
- Open the job’s configuration page.
- Click Configure.
- Click the Steps tab.
- From Add Step, select Unix Shell.
- In Script, enter these commands:

```
mvn install:install-file -Dfile=$WLS_HOME/server/lib/weblogic-maven-plugin.jar -DpomFile=$WLS_HOME/server/lib/pom.xml
mvn com.oracle.weblogic:weblogic-maven-plugin:deploy
```
- Click Save.
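In practice, the deploy goal usually needs connection and artifact details. Here's a hedged sketch of what such an invocation might look like; the admin URL, credentials, target, and archive path are placeholders and not values from this documentation:

```
# Deploy an application archive using the WebLogic Maven plugin
mvn com.oracle.weblogic:weblogic-maven-plugin:deploy \
  -Dadminurl=t3://examplehost.com:7001 \
  -Duser=weblogic \
  -Dpassword="$WLS_PASSWORD" \
  -Dsource=target/myapp.war \
  -Dtargets=AdminServer \
  -Dname=myapp
```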
Build Ant Applications
You can use Apache Ant to automate your build processes, as described in its build files:
- Upload the Ant build files (such as `build.xml` and `build.properties`) to the project Git repository.
- Open the job’s configuration page.
- Click Configure.
- In the Git tab, add the Git repository where you uploaded the build files.
- Click the Steps tab.
- From Add Step, select Ant.
- In Targets, specify the Ant targets, or leave it empty to run the default Ant target specified in the build file.
- In Build File, specify the path of the build file.
- If necessary, in Properties, specify the values for properties used in the Ant build file:

```
# comment
name1=value1
name2=$VAR2
```

  When a build runs, these values are passed to Ant as `-Dname1=value1 -Dname2=value2`. Always use `$VAR` for parameter references instead of `%VAR%`. Use a double backslash (`\\`) to escape a backslash (`\`). Avoid using double quotes (`"`). To define an empty property, use `varname=` in the script.
- If your build requires a custom `ANT_OPTS`, specify it in Java Options. You may use it to specify Java memory limits (example: `-Xmx512m`). Don’t specify other Ant options here (such as `-lib`); specify them in Targets.
- Click Save.
For more information, see https://ant.apache.org/.
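For reference, the Targets, Properties, Build File, and Java Options settings roughly correspond to an Ant command line like the following sketch; the target names `clean` and `dist` are assumptions:

```
# Java Options: -Xmx512m; Build File: build.xml; Properties: name1, name2; Targets: clean dist
ANT_OPTS=-Xmx512m ant -f build.xml -Dname1=value1 -Dname2="$VAR2" clean dist
```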
Build Gradle Applications
Using Gradle, you can automate your build processes as defined in its build script. For more information about Gradle, see https://gradle.org/. For information about the Gradle wrapper, see https://docs.gradle.org/current/userguide/gradle_wrapper.html.
Set Up a VM Build Executor and a Build Executor Template with Gradle
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add Gradle commands.
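For example, a Unix Shell step might invoke the Gradle wrapper that's checked into the Git repository. This is a minimal sketch, and the task names are assumptions:

```
# Run the Gradle wrapper from the repository root
chmod +x ./gradlew
./gradlew clean build --no-daemon
```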
Build Node.js Applications
Using Node.js, you can develop applications that run JavaScript on a server. For more information, see https://nodejs.org.
Set Up a VM Build Executor and a Build Executor Template with Node.js
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add a Node.js script.
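For example, a Unix Shell step for a typical Node.js project might look like this minimal sketch; the scripts are assumed to be defined in the project's package.json:

```
# Install dependencies and run the project's test script
npm install
npm test
```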
Access an Oracle Database Using SQLcl
Using SQLcl, you can run SQL statements from a build to connect and access an Oracle Database. You can use SQLcl to access any publicly available Oracle Database that you can connect to using a JDBC connect string. You can run DML, DDL, and SQL Plus statements. You can also use SQLcl in a test scenario and run SQL scripts to initialize seed data or validate database changes.
SQLcl requires Java SE 1.8 or later. To learn more about SQLcl, see http://www.oracle.com/technetwork/developer-tools/sqlcl/overview/index.html. Also see Using the help command in SQLcl in Using Oracle Database Exadata Express Cloud Service and the SQL Developer Command-Line Quick Reference documentation to learn more about the SQLcl commands that are supported.
To connect to Oracle Database Exadata Express Cloud Service, download the ZIP file that contains its credentials and upload it to the job’s Git repository. You can download the ZIP file from the Oracle Database Cloud Service service console. See Downloading Client Credentials in Using Oracle Database Exadata Express Cloud Service.
Set Up a VM Build Executor and a Build Executor Template with SQLcl
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add SQLcl commands.
Configure a Job to Run SQLcl Commands
- VB Studio doesn’t support SQL commands that edit the buffer (such as `set sqlformat csv`) or the console.
- VB Studio doesn’t support build parameters in the SQL file.
- If you are using Oracle REST Data Services (ORDS), some SQLcl commands, such as the BRIDGE command, require a JDBC URL:

```
BRIDGE table1 as "jdbc:oracle:thin:DEMO/demo@http://examplehost.com/ords/demo"(select * from DUAL);
```
- To mark a build as failed if the SQL commands fail, add the `WHENEVER SQLERROR EXIT 1` line to your script.
Here's how you create and configure a job that runs SQLcl commands:
When a build runs, VB Studio stores your Oracle Database credentials in the Oracle Wallet. Check the build’s log for the SQL output or errors.
Run Oracle PaaS Service Manager Commands Using PSMcli
Using Oracle PaaS Service Manager command line interface (PSMcli) commands, you can create and manage the lifecycle of various services in Oracle Public Cloud. You can create service instances, start or stop instances, or remove instances when a build runs.
For more information about PSMcli and its commands, see About the PaaS Service Manager Command Line Interface in PaaS Service Manager Command Line Interface Reference.
Set Up a VM Build Executor and a Build Executor Template with PSMcli
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add PSMcli commands.
Use OCIcli to Access Oracle Cloud Infrastructure Services
You can use Oracle Cloud Infrastructure command line interface (OCIcli) commands to create and manage Oracle Cloud Infrastructure objects and services when a build runs.
To run OCIcli commands in a build, you'll need this information:
- The user OCID
- A private key that has been set with no passphrase
Note:
You shouldn't use a passphrase for OCI public/private keys in an OCIcli build step. If you do, when the build job encounters the key you'll be prompted for the passphrase, but, since you can't interact with the job's shell to supply it, the build will fail and an error will be reported in the build job's log. To avoid this problem, you'll need to generate a public-private key pair without a passphrase and upload the public key to your user preferences.
See Upload Your Public SSH Key for information about generating an SSH key and uploading the public SSH key to your VB Studio account.
- The fingerprint of a user who can create and access the resources
- The tenancy name
Set Up a VM Build Executor and a Build Executor Template with OCIcli
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add OCIcli commands.
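Once the credentials above are configured, a Unix Shell step can run standard OCI CLI commands. This is a minimal sketch; the compartment OCID is a placeholder:

```
# Verify connectivity by printing the Object Storage namespace,
# then list the compute instances in a compartment
oci os ns get
oci compute instance list --compartment-id ocid1.compartment.oc1..exampleuniqueid
```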
Run Docker Commands
You can configure a job to run Docker commands on a Docker container when a build runs.
You should use the Docker container for short tests and builds. Don’t run a Docker container for long tests or builds, or the builds might not finish. For example, if you use a Docker image that’s listening on a certain port and behaves like a web server, most likely the build won’t exit.
For more information about Docker commands, see https://docs.docker.com/.
Tip:
If you face a network issue when you run Docker commands, try adding the `HTTP_PROXY` and `HTTPS_PROXY` environment variables in the Dockerfile.
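Alternatively, you can pass the executor's proxy settings to the image build as build arguments; this is a sketch, with the image name as a placeholder:

```
# Docker treats HTTP_PROXY/HTTPS_PROXY build args as predefined proxy settings
docker build \
  --build-arg HTTP_PROXY=$HTTP_PROXY \
  --build-arg HTTPS_PROXY=$HTTPS_PROXY \
  -t my_image_$TASKID .
```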
Set Up a VM Build Executor and a Build Executor Template with Docker
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add Docker commands.
Run Fn Commands
Fn, or Fn Project, is an open-source, container-native, serverless platform for building, deploying, and scaling functions in multi-cloud environments. To run Fn commands when a build runs, you must have access to a Docker container that has a running Fn server.
For more information about Fn, see https://fnproject.io/.
Set Up a VM Build Executor and a Build Executor Template with Fn
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After your organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and add Fn commands.
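As an illustration, a Unix Shell step might drive the Fn CLI roughly like this; the app and function names are placeholders, and the exact flow depends on your Fn server setup:

```
# Create a function, deploy it to an app on the local Fn server, and invoke it
fn init --runtime node hellofn
cd hellofn
fn create app exampleapp
fn deploy --app exampleapp --local
fn invoke exampleapp hellofn
```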
Use SonarQube
SonarQube is open source quality management software that continuously analyzes your application. When you configure a job to use SonarQube, the build generates an analysis summary that you can view from the job or build details page.
To learn about SonarQube, see its documentation at https://docs.sonarqube.org.
Create and Manage the Pre-Defined SonarQube Server Connection
You must be the project owner to add and manage SonarQube server connections.
Here's how you can set up a SonarQube system for your project's users and then create and manage a pre-defined SonarQube connection that they can use:
Action | How To |
---|---|
Add a SonarQube connection | |
Edit a connection to change the user credentials or provide another server ID | |
Delete the connection | |
Configure a Job to Connect to SonarQube
You can configure a job to use SonarQube from the Before Build tab and then add a post-build action (on the After Build tab) to publish its reports:
To view the SonarQube analysis summary after a build, from the job’s details page, click SonarQube Analysis Summary. The SonarQube Analysis Summary displays the SonarQube server URL for the job and the analysis summary.
Create a SonarQube Analysis Report for a VB Studio Project with JavaScript Sources
The VB Studio build system supports SonarQube analysis for Java using Maven and Gradle during building and packaging. This is for Java apps, not visual applications. It doesn't provide built-in support for analyzing JavaScript sources. If you need to perform a SonarQube analysis for JavaScript sources, such as those created by VB Studio, you'll need to create your own SonarQube analysis report by using the Unix Shell builder and then uploading the results to SonarQube.
- If the project being built is a Maven project, you'll need to direct the Sonar Scanner Maven plugin to include the JavaScript files for analysis.
- If the project is a VB Studio project that is purely JavaScript, you'll need to install and use the sonar-scanner command line tool to do the analysis.
Analyze JavaScript Sources in a Maven Project
If the project being built is a Maven project, by default, the Sonar Scanner Maven plugin will include only the Java sources from `src/main/java`. You'll need to make sure the plugin also includes the JavaScript files for analysis:
- Use the `-Dsonar.sources` parameter on the command line to explicitly include the path to the JavaScript files, as shown in this example:

```
mvn clean install sonar:sonar -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN -Dsonar.password=$SONAR_PASSWD -Dsonar.sources=src/main/java,src/main/webapp -Dsonar.projectName=$SONAR_PROJECT_NAME -Dsonar.projectKey=$SONAR_PROJECT_KEY -f UiServer/pom.xml
```

  In the example, `-Dsonar.sources=src/main/java,src/main/webapp` is used to explicitly add Java sources from `src/main/java` and JavaScript sources from `src/main/webapp`.
- The log will show that the JavaScript sources were analyzed, as were HTML and CSS files:

```
[2021-04-01 21:31:30] [INFO] Sensor CSS Metrics [cssfamily]
[2021-04-01 21:31:30] [INFO] Sensor CSS Metrics [cssfamily] (done) | time=29ms
[2021-04-01 21:31:30] [INFO] Sensor CSS Rules [cssfamily]
[2021-04-01 21:31:31] [INFO] 12 source files to be analyzed
[2021-04-01 21:31:31] [INFO] 12/12 source files have been analyzed
[2021-04-01 21:31:31] [INFO] Sensor CSS Rules [cssfamily] (done) | time=1446ms
[2021-04-01 21:31:31] [INFO] Sensor JavaScript analysis [javascript]
[2021-04-01 21:31:34] [INFO] 13 source files to be analyzed
[2021-04-01 21:31:36] [INFO] 13/13 source files have been analyzed
[2021-04-01 21:31:36] [INFO] Sensor JavaScript analysis [javascript] (done) | time=4971ms
[2021-04-01 21:31:36] [INFO] Sensor HTML [web]
[2021-04-01 21:31:36] [INFO] Sensor HTML [web] (done) | time=137ms
```
Analyze a VB Studio Project That Contains JavaScript Sources Only
For a VB Studio project that contains just JavaScript sources, you can create a Unix Shell step that downloads and installs the sonar-scanner command line tool, then uses it to perform the analysis:
- Open the job’s configuration page.
- Click the Steps tab.
- From Add Step, select Unix Shell.
- In Script, enter the following commands:
  - Download the sonar-scanner command line tool from the SonarQube website:

    ```
    curl -o sonar-scanner-cli-4.6.0.2311-linux.zip https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.6.0.2311-linux.zip
    ```
  - Unzip the tool:

    ```
    unzip sonar-scanner-cli-4.6.0.2311-linux.zip
    ```
  - Run the scanner to perform the analysis, after explicitly specifying which JavaScript sources you want it to analyze, as in `-Dsonar.sources=UiServer/src/main/webapp`:

    ```
    sonar-scanner-4.6.0.2311-linux/bin/sonar-scanner -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN -Dsonar.password=$SONAR_PASSWD -Dsonar.sources=UiServer/src/main/webapp -Dsonar.projectName=$SONAR_PROJECT_NAME -Dsonar.projectKey=$SONAR_PROJECT_KEY
    ```
- Click Save.
- Run the build and check the build log to make sure that the analysis was successful:

```
[2021-04-01 22:12:20] INFO: ------------- Run sensors on module Project1.Sonar_8_8_sonar_scanner
[2021-04-01 22:12:20] INFO: Load metrics repository
[2021-04-01 22:12:20] INFO: Load metrics repository (done) | time=486ms
[2021-04-01 22:12:22] INFO: Sensor CSS Metrics [cssfamily]
[2021-04-01 22:12:22] INFO: Sensor CSS Metrics [cssfamily] (done) | time=50ms
[2021-04-01 22:12:22] INFO: Sensor CSS Rules [cssfamily]
[2021-04-01 22:12:23] INFO: 12 source files to be analyzed
[2021-04-01 22:12:23] INFO: 12/12 source files have been analyzed
[2021-04-01 22:12:23] INFO: Sensor CSS Rules [cssfamily] (done) | time=1292ms
[2021-04-01 22:12:23] INFO: Sensor JaCoCo XML Report Importer [jacoco]
[2021-04-01 22:12:23] INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
[2021-04-01 22:12:23] INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
[2021-04-01 22:12:23] INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=3ms
[2021-04-01 22:12:23] INFO: Sensor JavaScript analysis [javascript]
[2021-04-01 22:12:26] INFO: 13 source files to be analyzed
[2021-04-01 22:12:28] INFO: 13/13 source files have been analyzed
[2021-04-01 22:12:28] INFO: Sensor JavaScript analysis [javascript] (done) | time=4827ms
[2021-04-01 22:12:28] INFO: Sensor C# Project Type Information [csharp]
[2021-04-01 22:12:28] INFO: Sensor C# Project Type Information [csharp] (done) | time=1ms
[2021-04-01 22:12:28] INFO: Sensor C# Properties [csharp]
[2021-04-01 22:12:28] INFO: Sensor C# Properties [csharp] (done) | time=0ms
[2021-04-01 22:12:28] INFO: Sensor JavaXmlSensor [java]
[2021-04-01 22:12:28] INFO: Sensor JavaXmlSensor [java] (done) | time=1ms
[2021-04-01 22:12:28] INFO: Sensor HTML [web]
[2021-04-01 22:12:28] INFO: Sensor HTML [web] (done) | time=151ms
[2021-04-01 22:12:28] INFO: Sensor VB.NET Project Type Information [vbnet]
[2021-04-01 22:12:28] INFO: Sensor VB.NET Project Type Information [vbnet] (done) | time=1ms
[2021-04-01 22:12:28] INFO: Sensor VB.NET Properties [vbnet]
[2021-04-01 22:12:28] INFO: Sensor VB.NET Properties [vbnet] (done) | time=1ms
[2021-04-01 22:12:28] INFO: ------------- Run sensors on project
[2021-04-01 22:12:28] INFO: Sensor Zero Coverage Sensor
[2021-04-01 22:12:28] INFO: Sensor Zero Coverage Sensor (done) | time=20ms
[2021-04-01 22:12:28] INFO: SCM Publisher SCM provider for this project is: git
[2021-04-01 22:12:28] INFO: SCM Publisher 25 source files to be analyzed
[2021-04-01 22:12:29] INFO: SCM Publisher 25/25 source files have been analyzed (done) | time=223ms
[2021-04-01 22:12:29] INFO: CPD Executor 5 files had no CPD blocks
[2021-04-01 22:12:29] INFO: CPD Executor Calculating CPD for 15 files
[2021-04-01 22:12:29] INFO: CPD Executor CPD calculation finished (done) | time=49ms
[2021-04-01 22:12:29] INFO: Analysis report generated in 165ms, dir size=202 KB
[2021-04-01 22:12:29] INFO: Analysis report compressed in 86ms, zip size=78 KB
[2021-04-01 22:12:30] INFO: Analysis report uploaded in 599ms
[2021-04-01 22:12:30] INFO: ANALYSIS SUCCESSFUL, you can browse http://server123.mycorp.com:9000/dashboard?id=qa-dev_project1_1.Sonar_8_8_sonar_scanner
[2021-04-01 22:12:30] INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
[2021-04-01 22:12:30] INFO: More about the report processing at http://server123.mycorp.com:9000/api/ce/task?id=AXiPfrh9RniEjxk9KTc9
[2021-04-01 22:12:40] INFO: Analysis total time: 32.061 s
```
Publish JUnit Results
JUnit test reports provide useful information about test results, such as historical test result trends, failure tracking, and so on.
If you use JUnit to run your application's test scripts, you can configure your job to publish JUnit test reports:
- Upload your application with test script files to the Git repository.
- Open the job’s configuration page.
- Click the After Build tab.
- From Add After Build Action, select JUnit Publisher.
- In Include JUnit XMLs, specify the path and names of XML files to include. You can use wildcards to specify multiple files:
  - If you’re using Ant, you could specify the path as `**/build/test-reports/*.xml`.
  - If you’re using Maven, you could specify the path as `target/surefire-reports/*.xml`. If you use this pattern, make sure that you don’t include any non-report files.
- In Exclude JUnit XMLs, specify the path and names of XML report files to exclude. You can use wildcards to specify multiple files.
- To see and retain the standard output and errors in the build log, select the Retain long standard output/error check box. If you don’t select the check box, the build log is saved, but the build executor truncates it to save space. If you select the check box, every log message is saved, but this might increase memory consumption and can slow the performance of the build executor.
- To combine all test results into a single table of results, select the Organize test output by parent location check box. If you use multiple browsers, the build executor will categorize the results by browser.
- To mark the build as failed when JUnit tests fail, select the Fail the build on fail tests check box.
- To archive videos and image files, select the Archive Media Files check box.
- Click Save.
After a build runs, you can view its test results.
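For example, if your job runs its tests with Maven, a minimal sketch of the test step that produces the XML files matched by the `target/surefire-reports/*.xml` pattern is:

```
# Run the unit tests; the Surefire plugin writes XML results to target/surefire-reports/
mvn test
```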
View a Build's JUnit Test Results
You can view a build's JUnit test results from the Test Results page:
Action | How To |
---|---|
View test results of the last build | |
View test results of a particular build | |
View test suite details | On the Test Results page, click the All Tests toggle button. From the Suite Name column, click the suite name. |
View details of a test | Open the test suite details page and click the test name. To view details of a failed test, on the Test Results page, click the All Failed Tests toggle button, and then click the test name. |
View test results history | On the Test Results page, click View Test Results History. |
If you configure the job to archive videos and image files, click Show to download the test image and click Watch to download the test video file.
The supported image formats are `.png`, `.jpg`, `.gif`, `.tif`, `.tiff`, `.bmp`, `.ai`, `.psd`, `.svg`, `.img`, `.jpeg`, `.ico`, `.eps`, and `.ps`.
The supported video formats are `.mp4`, `.mov`, `.avi`, `.webm`, `.flv`, `.mpg`, `.gif`, `.wmv`, `.rm`, `.asf`, `.swf`, `.avchd`, and `.m4v`.
Use the Xvfb Wrapper
Xvfb is an X server that implements the X11 display server protocol and can run on machines that don't have physical input devices or a display.
Set Up a VM Build Executor and a Build Executor Template with Xvfb
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
After the organization administrator adds a VM build executor to the build executor template, you can create and configure a job to use that build executor template and Xvfb.
Publish Javadoc
If your application source code files are configured to generate Javadoc, you can configure a job to publish Javadocs when a build runs:
Archive Artifacts
Archived artifacts can be downloaded manually and then deployed. By default, build artifacts are kept as long as the build log is.
If you want a job's builds to archive artifacts, you can do so as an after build action:
- Open the job’s configuration page.
- Click Configure.
- Click the After Build tab.
- Click Add After Build Action and select Artifact Archiver.
- In Files to archive, enter a comma-separated list of file paths, such as `env/,SQL/,target/`, using paths relative to the workspace, not full file paths. You can use wildcards to archive multiple files. For example, you could use `env/**` to archive all files in all subdirectories of the `env` directory, or `env/**/*.bin` to archive all files that end with the `.bin` extension in all subdirectories of the `env` directory. Here are some more examples:
  - `**/*` or `**` archives all files in all directories and subdirectories
  - `**/*.sql` archives all files that have a `.sql` file extension, in all directories and subdirectories
  - `env/*` matches all files in the `env` folder itself, but doesn't include any files in any subdirectories

  The patterns can be more complex too. For example, you could use `**/target/*.jar` to archive `.jar` files in all `target` directories in your workspace.
In Files to exclude, enter a comma-separated list of files, including the path, as described in the previous step.
A file that matches the exclude pattern won’t be archived even if it matches the pattern specified in Files to archive.
-
If your application is a Maven application and you want to archive Maven artifacts, select Archive Maven Artifacts.
To archive the Maven POM file along with the Maven artifacts, select Include POM.xml.
-
Click Save.
Discard Old Builds and Artifacts
To save storage space, you can configure a job to discard its old builds and artifacts:
- Open the job’s configuration page.
- Click Settings.
- Click the General tab, if necessary.
- If not selected, select Discard Old Builds.
- Configure the discard options.
- Click Save.
Old builds will be discarded after you save the job configuration and after a job has been built.
Copy Artifacts from Another Job
If your application depends on artifacts from another job, you can configure the job to copy those artifacts when a build is run:
- Open the job’s configuration page.
- Click Configure.
- Click the Before Build tab.
- Click Add Before Build Action and select Copy Artifacts.
- In From Job, select the job whose artifacts you want to copy.
- In Which Build, select the build that generated the artifacts.
- If you select the Use last successful build if not run in pipeline option, the last successful build of the other job is used whenever this build isn't run in a pipeline. If you don't select this option but do select the upstream build in the previous step and don't run the build in a pipeline, the build will fail.
- In Artifacts to copy, specify the artifacts to copy. When a build runs, the artifacts are copied with their relative paths. If you don't specify a value, the build copies all artifacts. The `archive.zip` file is never copied.
- In Target Directory, specify the workspace directory where the artifacts will be copied.
- To flatten the directory structure of the copied artifacts, select Flatten Directories.
- By default, if a build can’t copy artifacts, it'll be marked as failed. If you don’t want the build to be marked as failed, select Optional (Do not fail build if artifacts copy failed).
- Click Save.
Configure General and Advanced Job Settings
You can configure several general and advanced job settings, such as name and description, the Java version used in the build, discarding old and running concurrent builds, adding timestamps to the build log, and more:
Action | How To |
---|---|
Update the job’s name and description | |
Check the software available on the job’s build executor template | |
Run concurrent builds | |
Set a quiet period | |
Set a retry count | |
Abort a build if it’s stuck for some duration | |
Remove timestamps from the build log | |
Set the maximum size of the console log | |
Manage Build Actions
You can manage build actions in job configurations, including disabling/enabling, reordering, or removing build actions. These operations apply to build actions on the Git, Parameters, Before Build, Steps, and After Build tabs (under Configure) and build actions on the Triggers tab (under Settings).
Here are the job configuration build actions that you can manage:
Action | How To |
---|---|
Disable a build action | In any tab on the Job Configuration page, for any enabled build action, change the toggle from Enabled to Disabled and click Save. Use this toggle to disable the build step or action temporarily. If a step or action is disabled, it'll be skipped when the job is run. If you see a validation error while trying to save a job configuration after adding, then disabling, a new build action, make sure that you filled out all required fields. Required fields are still required, even though the build action is disabled. You must either fill out the required field(s) in the disabled build action or remove the build action before trying to resave the job configuration. |
Enable a disabled build action | In any tab on the Job Configuration page, for any disabled build action, change the toggle from Disabled to Enabled and click Save. |
Reorder build actions | In any tab on the Job Configuration page that has multiple build actions, drag and drop any build action to rearrange the order and click Save. |
Remove a build action | In any tab on the Job Configuration page, for any enabled or disabled build action, click Remove. |
Change a Job’s Build Executor Template
Contact the organization administrator to create a build executor template and add software bundles.
The organization administrator creates the build executor template, selects and adds software bundles to it, then creates an instance of the VM Build Executor. You specify the template when you create or configure your job. Then, when you run the job, the software that is specified in the build executor template is installed on the VM build executor.
Note:
To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display. See Create and Manage Build Executor Templates in Administering Visual Builder Studio.
Here's how you can change your job’s build executor template after you create the job:
- Open the job’s configuration page.
- Click Settings.
- Click the Software tab.
- Select the build executor template that you want to use for your builds.
- Click Save.
Run a Build
You can run a job’s build manually or configure the job to trigger it automatically on an SCM commit or according to a schedule:
Action | How To |
---|---|
Run a build manually | Open the job’s details page and click Build Now. You can also run a job’s build from the Jobs Overview page. In the jobs table, click Build Now. |
Run a build on SCM commit | |
Run a build on a schedule | |
A job that takes more than eight hours to build will fail. If you know that a job's processes will take more than eight hours to execute, you should distribute those processes in multiple jobs and run them together in a pipeline.
View a Job’s Builds and Reports
From the Builds page, click a job name to open its details page, from which you can view a job’s builds, reports, and build history, or perform actions such as running a build or configuring the job.
View a Build’s Logs and Reports
A build generates various types of reports and logs, such as SCM Changes, test results, and action history. You can open these reports from the Job Details page or the Build Details page. On the Job Details page or the Build Details page, click the report icon to view its details.
Here are the types of reports that are generated by a build:
Log/Report | Description |
---|---|
Changes | View all files that have changed in the build. When a build is triggered, the build system checks the job’s Git repositories for any changes to the SCM. If there are any updates, the SCM Change log displays the files that were added, edited, or removed. |
Artifacts | View the latest archived artifacts generated by the build. |
Javadoc | View the build's Javadoc output. The report is available only if the application’s build generated Javadoc. |
Tests | View the log of the build’s JUnit test results. To open the Test Suite details page, on the Test Results page, click the All Tests toggle button and click the suite name in the Suite Name column. To view details of a test, on the Test Results page, click the All Failed Tests toggle button and then click the test name link in the Test Name column. You can also click the All Tests toggle button, open the test suite details page, and then click the test name link in the Test Name column. |
Build Log | View the last build’s log. In the log page, review the build log. If the log is displayed partially, click the Full Log link to view the entire log. To download the log as a text file, click the Download Console Output link. |
Git Log | View the Git SCM polling log, which displays the log of builds triggered by SCM polling. The log includes scheduled builds and builds triggered by SCM updates. In the Job Details page of a job, click Latest SCM Poll Log to open it. |
Audit | View the Audit log of user actions. You can use the Audit log to track the user actions on a build and to see who performed particular actions on the job. For example, you can see who canceled a build of the job, or who disabled the job and when it was disabled. |
SonarQube | View the SonarQube analysis report of the job. |
Vulnerabilities | View the Security Vulnerabilities report that identifies direct and transitive dependencies in the job's Maven, Node.js, JavaScript, and/or Gradle projects. |
View a Project’s Build History
The Recent Build History page displays builds of all the project's jobs.

Tip:
To sort the table data by a column, right-click inside the build history table column and select the sort order from the Sort context menu.

View a Job’s Build History
A job’s build history can be viewed in the Build history section of the Job Details page. It displays the status of running builds, and completed job builds in descending order (most recent first) along with their build numbers, date and time, and a link to the build's console output.
The build history shows how the build was triggered as well as its status, build number, and date-time stamp. In this view, you can click the Console icon to open the build’s console and the Delete icon to delete the build.
When you review the build history, take note of these things:
- In the By column, the icons indicate the following:

  This icon ... | Indicates: |
  ---|---|
  User | The build was initiated by a user. |
  SCM Change | The build was triggered by an SCM change. |
  Pipeline | The build was initiated by a pipeline. Click to open the build’s pipeline instance. |
  Periodic Build Trigger | The build was triggered by a periodic build trigger. |
  Build System | The build was started or rescheduled by the build system. |
In the Build column, an * in the build number indicates the build is annotated with a description. Mouse over the build number to see the description.
-
The list doesn’t show discarded and deleted jobs.
-
If a running build remains stuck in the queued state for a long time, you can mouse over the
Queued
status to display a message about the problem.If the build uses a VM build executor, you can contact the organization administrator to check its status.
-
To sort the table data in ascending or descending order, click the header column name and then click the Previous or Next icon in the column header.
As an alternative, you can right-click inside the table column and select the sort order from the Sort context menu.
-
Only project members can delete builds. Non-members cannot.
View a Job’s User Action History
You can use the Audit log to track a job’s user actions. For example, you can see who canceled a build of the job, or who disabled the job and when it was disabled.

The log displays information about these user actions:
- Who created the job
- Who started a build or how a build was triggered (followed by the build number), when the build succeeded or failed, and the duration of the build. A build can also be triggered by a timer, a commit to a Git repository, or an upstream job.
- Who aborted a build
- Who changed the configuration of the job
- Who disabled a job
- Who enabled a job
View a Build’s Details
A build’s details page shows its status, links to open build reports, download artifacts, and logs. To open a build’s details page, click the build number in the Build History.
You can perform these common actions from a build’s details page:
Action | How To |
---|---|
Keep a build forever | A build that’s marked as ‘forever’ isn’t removed if a job is configured to discard old builds automatically. You can’t delete it either. To keep a build forever, click Configure, select the Keep Build Forever check box, and click Save. |
Add a name and description to a build | Adding a name and description is especially helpful if you mark a particular build to keep it forever and not get discarded automatically. When you add a description to a build, an * is added to the build number in the Build History table. To add a name and description, click Configure, enter the details in Name and Description, and click Save. |
Open a build’s log | Click Build Log. |
Delete a build | Click Delete. |
Download Build Artifacts
Build artifacts are displayed in a directory tree structure. You can click the link to download parts of the tree, including individual files, directories, and subdirectories.
If the job is configured to archive artifacts, you can download them to your computer and then deploy to your web server:
- Open the job’s details page.
- Click Artifacts. To download artifacts of a particular build, in the Build History, click the build number, and then click Artifacts.
- Expand the directory structure and click the artifact link (file or directory) to download it. To download a zip file of all artifacts, click (All files in zip).
- Save the file to your computer.
Watch a Job
You can subscribe to email notifications that you'll receive when a build of a job succeeds or fails.
To get email notifications, enable them in your user preferences, and then set up a watch on the job:
Action | How To |
---|---|
Enable your email notifications preference | In your user preferences page, select the Build Activities check box. |
Watch a job | |
Disable email notifications of the job to all subscribed members | |
Build Executor Environment Variables
When you run a build job, you can use environment variables in your shell scripts and commands to access software on the VM build executor.
To use a variable, use the `$VARIABLE_NAME` syntax, such as `$BUILD_ID`.
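For example, a Unix Shell build step can reference these variables directly; this is a minimal sketch:

```
# Print some build metadata and create an output directory in the workspace
echo "Building job $JOB_NAME, build number $BUILD_NUMBER"
echo "Workspace: $WORKSPACE"
mkdir -p "$WORKSPACE/dist"
```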
Common Environment Variables
Here are some common environment variables:
Environment Variable | Description |
---|---|
BUILD_ID | The current build’s ID. |
BUILD_NUMBER | The current build number. |
BUILD_DIR | The build output directory. |
JOB_NAME | The name of the current job. |
HTTP_PROXY | The HTTP proxy for outgoing connections. |
HTTP_PROXY_HOST | The HTTP proxy host for outgoing connections. |
HTTP_PROXY_PORT | The HTTP proxy port for outgoing connections. |
HTTPS_PROXY | The HTTPS proxy for outgoing connections. |
HTTPS_PROXY_HOST | The HTTPS proxy host for outgoing connections. |
HTTPS_PROXY_PORT | The HTTPS proxy port for outgoing connections. |
NO_PROXY | A comma-separated list of domain names or IP addresses for which the proxy should not be used. You can also specify port numbers. |
NO_PROXY_ALT | A pipe-separated ( \| ) list of domain names or IP addresses for which the proxy should not be used. You can also specify port numbers. |
PATH | The PATH variable, set in the VM build executor, specifies the path of executables in the VM build executor. Executables from the software bundles are available on the VM build executor's PATH variable. See Software for Build Executor Templates in Administering Visual Builder Studio for more information. |
WORKSPACE | The absolute path of the VM build executor's workspace. |
Software Environment Variables
Environment Variable | Description |
---|---|
GRADLE_HOME | The path of the Gradle directory. |
JAVA_HOME | The path of the directory where the Java Development Kit (JDK) or the Java Runtime Environment (JRE) is installed. If your job is configured to use a specific Java version, the build executor sets the variable to the path of the specified Java version. When the variable is set, PATH is also updated to include `$JAVA_HOME`. |
NODE_PATH | The path of the Node.js modules directory. |
Tip:
- You can run the `env` command as a Shell build step to view all environment variables of the build executor.
- Some Linux programs, such as curl, only support lower-case environment variables. Change the build steps in your job configuration to use lower-case environment variables:

```
export http_proxy="$HTTP_PROXY"
export https_proxy="$HTTPS_PROXY"
export no_proxy="$NO_PROXY"
curl -v http://www.google.com
```
Software Environment Variables for SOA
To access SOA, use these environment variables that are defined for you when you include a SOA bundle in your template:
- Use JAVACLOUD_HOME variables to access the Java SDK.
- Use MIDDLEWARE_HOME variables to access Oracle Fusion Middleware. The `MIDDLEWARE_HOME` directory includes the WebLogic Server installation directory and the Oracle Common library dependencies.
- Use WLS_HOME variables to access the WebLogic Server binary directory.
Make sure that you have the right software available in your job's build executor template:
Software | Variables |
---|---|
SOA 12.2.1.4 | JAVACLOUD_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.4.0/jdeveloper/cloud/oracle-javacloud-sdk/lib JAVACLOUD_HOME_SOA_12_2_1= MIDDLEWARE_HOME_SOA= MIDDLEWARE_HOME_SOA_12_2_1= ORACLE_HOME= ORACLE_HOME_SOA= ORACLE_HOME_SOA_12_2_1= WLS_HOME_SOA= WLS_HOME_SOA_12_2_1= |
SOA 12.2.1.3 | JAVACLOUD_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/jdeveloper/cloud/oracle-javacloud-sdk/lib JAVACLOUD_HOME_SOA_12_2_1= MIDDLEWARE_HOME_SOA= MIDDLEWARE_HOME_SOA_12_2_1= ORACLE_HOME= ORACLE_HOME_SOA= ORACLE_HOME_SOA_12_2_1= WLS_HOME_SOA= WLS_HOME_SOA_12_2_1= |
Software Environment Variables for Oracle JDeveloper
To access Oracle JDeveloper, use these environment variables that are defined for you when you include a JDeveloper bundle in your template:
- Use JAVACLOUD_HOME variables to access the Java SDK.
- Use MIDDLEWARE_HOME variables to access Oracle Fusion Middleware. The `MIDDLEWARE_HOME` directory includes the WebLogic Server installation directory and the Oracle Common library dependencies.
- Use WLS_HOME variables to access the WebLogic Server binary directory.
Make sure that you have the right software available in your job's build executor template:
Software | Variables |
---|---|
JDeveloper 12.2.1.4 | JAVACLOUD_HOME=/opt/Oracle/Middleware_12.2.1.4.0/jdeveloper/cloud/oracle-javacloud-sdk/lib MIDDLEWARE_HOME= ORACLE_HOME= WLS_HOME= |
JDeveloper 12.2.1.3 | JAVACLOUD_HOME=/opt/Oracle/Middleware_12.2.1.3.0/jdeveloper/cloud/oracle-javacloud-sdk/lib MIDDLEWARE_HOME= ORACLE_HOME= WLS_HOME= |
JDeveloper 11 | JAVACLOUD_HOME=/opt/Oracle/Middleware/jdeveloper/cloud/oracle-javacloud-sdk/lib MIDDLEWARE_HOME= ORACLE_HOME= WLS_HOME= |
Run Jobs in a Pipeline
You can create, manage, and configure job pipelines from the Pipelines tab of the Builds page.
What Is a Pipeline?
A pipeline lets you define dependencies between jobs and create a path, or chain, of builds. A pipeline helps you run continuous integration jobs and reduces network traffic.
To create a pipeline, you design a pipeline diagram where you define the dependencies of jobs. When you create a dependency of one job on another, you define the order of automatic builds of the dependent jobs. If required, dependent jobs can be configured to use artifacts of the parent job too.
For example, in this diagram, Job 2 depends on Job 1 and runs after Job 1 is successful.
In this diagram, Job 2, Job 3, and Job 4 depend on Job 1 and run after Job 1 is successful. Job 2, Job 3, and Job 4 are scheduled in parallel. They can all run at the same time.
This diagram shows a complex example.
The above diagram defines these dependencies:
- Job 2 and Job 3 depend on Job 1 and run after Job 1 is successful
- Job 4 and Job 5 depend on Job 2 and run after Job 2 is successful
- Job 6 and Job 7 depend on Job 4 and run after Job 4 is successful
- Job 8 depends on Job 6 and Job 7 and runs after Job 6 and Job 7 are successful
- Job 1 is the initial job. Running it triggers a chain. All jobs after it in the chain (Job 2 through Job 8) run automatically, one after the other.
You can create multiple pipeline diagrams of jobs. If multiple pipelines have some common jobs, then multiple builds run some of those jobs. For example, in this figure, Pipeline 1 and Pipeline 2 have common jobs:
Let’s assume that Pipeline 1 is defined first and Pipeline 2 is defined second. If both pipelines are triggered, the builds run in this order:
- A build of Job 1 runs.
- Builds of Job 2 and Job 3 of Pipeline 1 enter the build executor queue after Job 1 is successful. A build of Job 2 of Pipeline 2 also enters the build executor queue after Job 1 is successful.
- Builds of jobs in the build executor queue run on a first-come, first-served basis. So, Job 2 and Job 3 of Pipeline 1 run first. Let's call these builds Build 1 of Job 2 and Job 3. Then, another build of Job 2 of Pipeline 2 runs. Let's call it Build 2 of Job 2.
- A build of Job 4 of Pipeline 1 joins the build executor queue as soon as Job 2 is successful. A build of Job 3 of Pipeline 2 also joins the queue when Job 2 is successful.
- As soon as a build executor is available, Build 1 of Job 4 runs and Build 2 of Job 3 also runs. Remember that Build 1 of Job 3 ran in Pipeline 1.
- After a build of Job 3 of Pipeline 2 is successful, a build of Job 4 of Pipeline 2 joins the queue and runs when a build executor is available. Remember that this is Build 2 of Job 4, as Build 1 ran in Pipeline 1.
While creating multiple pipeline diagrams with common jobs, be careful if a job is dependent on artifacts of the parent job.
Create a Pipeline
Here's how you can create a pipeline:
- In the left navigator, click Builds.
- Click the Pipelines tab.
- Click + Create Pipeline. The Create Pipeline dialog is displayed.
- In Name and Description, enter a unique name and a description, respectively.
- Select the Auto start when pipeline jobs are built externally check box to trigger a pipeline build when any job in the pipeline is triggered externally (that is, started from outside the pipeline). In the pipeline, builds of jobs that follow the started job will be run as shown in the diagram, but no builds of jobs that precede the started job will be run.
- If you selected the Auto start option in the previous step, you can select the Auto start only when trigger jobs are built check box to limit the jobs that can automatically start the pipeline to trigger jobs only. This effectively excludes all jobs that aren't trigger jobs. If both options are selected, when a non-trigger pipeline job is started manually, it won't be shown in the Pipeline Instances page. It will be shown on the Jobs Overview page instead, because the Trigger only option was selected.
- Select the Disallow pipeline jobs to build externally when the pipeline is building check box to disable manual or automatic builds of the jobs that are part of the pipeline while the pipeline is running.
- Click Create.
- In the Designing Pipeline page, design the pipeline, and click Save.
Use the Pipeline Designer
You use the pipeline designer to create a pipeline diagram that defines dependencies between jobs and the order of their builds.
The Jobs list shows all jobs of the project on the left side of the page. Drag and drop jobs to the designer area to design the pipeline diagram. Click Configure to configure the dependency condition between the parent and the child job.
Create a One-to-One Dependency
A one-to-one dependency is formed between a parent and a child job. When a build of the parent job is successful, a build of the child job runs automatically.
To create a one-to-one dependency of a child job to its parent job:
- From the Jobs list, drag-and-drop the parent job to the designer area.
- From the Jobs list, drag-and-drop the dependent (or child) job to the designer area.
- To indicate the parent job, the job that triggers the pipeline build, mouse over the handle of the Start node. The cursor icon changes to the + cursor icon. The Start node indicates the starting point of the pipeline; it is available in all pipelines and can’t be removed. In this example, `Job 1` is the parent job and `Job 2` is the dependent job.
- Drag the cursor from the Gray circle handle to the White circle handle. An arrow line appears.
- Similarly, mouse over the Blue circle handle and drag-and-drop the arrow head over the White circle handle.

A dependency is now formed. In the above example, `Job 2` is now dependent on `Job 1`. A build of `Job 2` will run automatically after every `Job 1` build is successful.

To delete a job node or a dependency, click to select it, and then click Delete.
Create a One-to-Many Dependency
A one-to-many dependency is formed between one parent job and multiple child jobs. When a build of the parent job is successful, builds of child jobs run automatically.
To create a one-to-many dependency between jobs:
- From the Jobs list, drag-and-drop the parent job to the designer area.
- From the Jobs list, drag-and-drop all dependent (or child) jobs to the designer area. Here, `Job 1` is the parent job and `Job 2`, `Job 3`, and `Job 4` are the dependent jobs.
- To indicate the parent job, the job that triggers the pipeline build, mouse over the Gray circle handle of the Start node. The cursor icon changes to the + cursor icon.
- Drag the cursor from the Gray circle handle to the White circle handle of the job. An arrow line appears.
- Similarly, mouse over the Blue circle handle of the parent job and drag-and-drop the arrow head over the White circle handle of the child jobs.

A dependency is now formed. In the above example, `Job 2`, `Job 3`, and `Job 4` are now dependent on `Job 1`. A build of `Job 2`, `Job 3`, and `Job 4` runs automatically after every `Job 1` build is successful.

To delete a job node or a dependency, click to select it, and then click Delete.
Create a Many-to-One Dependency
A many-to-one dependency is formed between multiple parent jobs and one child job. When builds of all parent jobs are successful, a build of the child job runs automatically.
To create a many-to-one dependency on parent jobs with a child job:
- From the Jobs list, drag-and-drop all parent jobs to the designer area.
- From the Jobs list, drag-and-drop the dependent (or child) job to the designer area. Here, `Job 2`, `Job 3`, and `Job 4` are the parent jobs and `Job 5` is the dependent job.
- To indicate the parent job, the job that triggers the pipeline build, mouse over the Gray circle handle of the Start node. The cursor icon changes to the + cursor icon.
- Drag the cursor from the Gray circle handle to the parent job's White circle handle. Repeat this step for all parent nodes.
- Drag the cursor from the parent job's White circle handle. An arrow line appears. Repeat this step for all parent nodes.
- Similarly, mouse over the parent job's Blue circle handle and drag-and-drop the arrow head over the dependent job's White circle handle.

A dependency is now formed. In the above example, `Job 5` is dependent on `Job 2`, `Job 3`, and `Job 4`. A build of `Job 5` will run automatically after `Job 2`, `Job 3`, and `Job 4` are successful.

To delete a job node or a dependency, click to select it, and then click Delete.
Configure the Dependency Condition
When you create a dependency between a parent and a child job, by default, a build of the child job runs after the parent job’s build is successful. You can configure the dependency to run a build of the child job after the parent job’s build fails:
- In the pipeline designer, click to select the dependency condition arrow.
- In the pipeline designer toolbar, click Configure.
- In the pipeline flow config editor, in Result Condition, select Successful, Failed, or Test Failed. If you want to select more than one dependency condition, you can click in the Result Condition field again and select another condition. You can also double-click the dependency arrow to open the pipeline flow config editor. You can’t configure the dependency condition from the Start node.

  Note: If you configure the pipeline using YAML, you'll have access to additional options that aren't available in the UI. However, the dependency conditions in YAML all map to the three that you have access to in the UI. See Set Dependency Conditions in Pipelines Using YAML.
- Click Apply.
Manage Pipelines
You can manage a pipeline by editing the pipeline diagram from the Configure Pipeline page:
Action | How To |
---|---|
Design the pipeline diagram | In the Pipelines tab, for the pipeline whose diagram you want to edit, click Configure. |
Run a pipeline | To run all jobs of a pipeline in the defined order, in the Pipelines tab, click Build. |
View a pipeline’s instances | When you trigger a pipeline, an instance of the pipeline is created. To view a pipeline's instances, click its name in the Pipelines tab. The Pipeline Instances page displays the pipeline run history. For each pipeline instance, the page shows the pipeline diagram and its status. A pipeline's status is determined by its jobs' builds. You can see the status of jobs in a pipeline instance by looking at the instance in the UI. In the pipeline diagram, the color of a job node indicates the job’s status. |
View a pipeline's instance log | You can select View Log on the Pipeline Instances page to see a historical record of actions taken by the pipeline. Sometimes the log is helpful to see why a pipeline didn't advance when you expected it to. You can use View Log to see who started the pipeline, when each build was run, and the status of each build job in the pipeline. Note: You can't use View Log to display logs that were created before 19.4.3. Those logs appear empty. To see the full log for each build job in the pipeline instance, you'll need to navigate to each specific build job log. If you select a specific build job in the Pipeline Instance page and click its build number, you'll go to the Build Details page, from which you can access the log for that build. Just click Build Log to see specific details for that build job. |
Edit a pipeline | In the Pipelines tab, for the pipeline you want to edit, click Configure. |
Delete a pipeline | In the Pipelines tab, click Delete. When you delete a pipeline, you're removing the dependency or the order of job builds. The jobs aren’t being deleted. |
Add or Export Parameters and Parameter Lists
You might want to set or change parameters in a pipeline job that can be used by downstream jobs in the same pipeline. Or, you might want to add parameters at the start of job execution, based upon data from a Git clone operation, when the cloned repository contains information that will become the value of new parameters or override the value of existing parameters. This parameter can then be used to configure subsequently run jobs, and appear as environment variables in shell scripts run in the build. You might even want to export parameters at the end of job execution, based upon data that was calculated during the build.
Both added and exported parameters would be visible to downstream jobs, which could, in turn, modify a subset of the parameters and then pass them along.
Parameters can be added in these ways:
- From a list of parameter definitions that are written in the same manner that environment variables are set in a shell script, that is, from a file with one or more lines that contain `PARAMETER_NAME=value` definitions.
- With multi-line values, such as private keys, and parameters with sensitive contents, like passwords and private keys. These include items that are more complex than those that can be specified using the simple definition format.
All jobs in a pipeline currently see job parameters that have been configured for all jobs in the pipeline. These parameters are collected from the jobs when the pipeline is started, and are added to all jobs that are downstream of the condition that started the pipeline, that is, of a triggering job or the "Start" node of a manually started or periodically triggered pipeline. Then, when a job completes, its parameters are extracted and are passed on to any downstream jobs it triggers.
These job parameters can be modified, and new parameters can be added, in subsequent build steps during a run. The Add and Export tasks simply provide a way to explicitly direct that parameters be added both before and after Build steps, such as shell scripts, run.
Add a Parameter
The Add a Parameter task runs during build setup, after the Git steps finish running but before any build steps are run. The task adds a single parameter at a time. Because the value is read from a file, it can span multiple lines (a private key, for example), and the parameter can be marked sensitive, which means its value isn't printed in the build log. You can configure zero or more of these tasks in a job.
Here's how you configure a pre-build task that adds a parameter (or multiple parameters) to a build job:
- In the
left navigator, click
Builds
.
- In the Jobs overview page, select the job you want to modify and the Jobs Detail page will display.
- Click Configure.
This displays the Job Configuration page.
- In the Git tab, click Add Git and select the repository where the file with the parameter is stored.
- Click the Before Build tab.
- Click Add Before Build Action and select Add Parameter.
- In Parameter name, enter the name of the parameter.
- In File containing parameter value, enter the name of the file that contains the value for the parameter.
- Select the Sensitive checkbox to prevent printing the value of parameters with sensitive contents, like passwords and private keys, in the build log.
- Repeat steps 6-8 to add multiple parameters.
- Click Save.
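If you define the job in YAML instead, the same pre-build task corresponds to an add-param entry in the job's before section. Here's a minimal sketch; the job name, parameter name, and file path are hypothetical:
job:
  name: AddParamJob
  vm-template: Basic Build Executor Template
  git:
    - url: "https://mydevcsinstance-mydomain/.../scm/employee.git"
  before:
    - add-param:
        parameter-name: "DEPLOY_KEY"        # name of the parameter to add
        file-path: "config/deploy_key.pem"  # file in the workspace that contains the value (can be multi-line)
        sensitive: true                     # don't print the value in the build log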
Add a Parameter List
The Add a Parameter List task runs during build setup, after the Git steps finish running but before any build steps are run. The task reads a list of one or more parameter definitions, one per line, in the form PARAMETER_NAME=value, and sets the job parameters accordingly. You can configure zero or more of these tasks in a job.
Here's how you configure a pre-build task that adds a parameter list to a build job:
- In the
left navigator, click
Builds
.
- In the Jobs overview page, select the job you want to modify and the Jobs Detail page will display.
- Click Configure.
This displays the Job Configuration page.
- In the Git tab, click Add Git and select the repository where the file with the parameter list is stored.
- Click the Before Build tab.
- Click Add Before Build Action and select Add Parameter List.
- Enter the name of the file that contains the parameter definitions.
- Click Save.
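In YAML, this corresponds to an add-params entry in the job's before section. A minimal sketch, assuming a file named build.params committed to the job's Git repository:
before:
  - add-params:
      file-path: "build.params"  # file with one PARAMETER_NAME=value definition per line
The file itself uses the same format as environment variable assignments in a shell script, for example:
APP_VERSION=2.4.1
TARGET_ENV=staging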
Export a Parameter
The Export a Parameter task runs after the build steps have been run. The task exports a single parameter at a time. Because the value is read from a file, it can span multiple lines (a private key, for example), and the parameter can be marked sensitive, which means its value isn't printed in the build log. You can configure zero or more of these tasks in a job.
Here's how you configure a post-build task that exports a parameter (or multiple parameters) that can be passed to a downstream build job:
- In the
left navigator, click
Builds
.
- In the Jobs overview page, select the job you want to modify and the Jobs Detail page will display.
- Click Configure.
This displays the Job Configuration page.
- In the Git tab, click Add Git and select the repository with the file where the value for the parameter will be written.
- Click the After Build tab.
- Click Add After Build Action and select Export Parameter.
- In Parameter name, enter the name of the parameter to be exported.
- In File containing parameter value, enter the name of the file to write the value for the parameter.
- Select the Sensitive checkbox to prevent printing the value of parameters with sensitive contents, like passwords and private keys, in the build log.
- Repeat steps 6-8 to export multiple parameters.
- Click Save.
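The YAML equivalent is an export-param entry in the job's after section. A minimal sketch with hypothetical names:
after:
  - export-param:
      parameter-name: "BUILD_IMAGE_TAG"  # name of the parameter passed to downstream jobs
      file-path: "out/image_tag.txt"     # file written during the build that contains the value
      sensitive: false                   # set to true for passwords or private keys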
Export a Parameter List
The Export a Parameter List task runs after the build steps have been run. The task reads a list of one or more parameter definitions, one per line, in the form PARAMETER_NAME=value, and sets job parameters accordingly. You can configure zero or more of these tasks in a job.
Here's how you configure a post-build task that exports a parameter list that can be used by a downstream build job:
- In the
left navigator, click
Builds
.
- In the Jobs overview page, select the job you want to modify and the Jobs Detail page will display.
- Click Configure.
This displays the Job Configuration page.
- In the Git tab, click Add Git and select the repository with the file where the values for the parameter list will be written.
- Click the After Build tab.
- Click Add After Build Action and select Export Parameter List.
- Enter the name of the file where the parameter definitions used in the build job will be written.
- Click Save.
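In YAML, use an export-params entry in the job's after section. The sketch below, with hypothetical file and parameter names, exports every PARAMETER_NAME=value line written to a file during the build:
after:
  - export-params:
      file-path: "out/downstream.params"  # file with one PARAMETER_NAME=value definition per line
A downstream job in the same pipeline then sees each exported parameter as an environment variable in its shell steps, for example:
steps:
  - shell:
      script: "echo Deploying version $APP_VERSION"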
Configure Jobs and Pipelines with YAML
YAML (YAML Ain't Markup Language) is a human-readable data serialization language that is commonly used for configuration files. To learn more about YAML, see https://yaml.org/.
In VB Studio, you can use a YAML file (a file with a .yml extension) to store a job or
pipeline configuration in any of the project's Git repositories. The build system constantly
monitors the Git repositories and, when it detects a YAML file, creates or updates a job or a
pipeline with the configuration specified in the YAML file.
Here's an example with a YAML file that configures a job:
job:
name: MyFirstYAMLJob
vm-template: Basic Build Executor Template
git:
- url: "https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_902/scm/employee.git"
branch: main
repo-name: origin
steps:
- shell:
script: "echo Build Number: $BUILD_NUMBER"
- maven:
goals: clean install
pom-file: "employees-app/pom.xml"
after:
- artifacts:
include: "employees-app/target/*"
settings:
- discard-old:
days-to-keep-build: 5
builds-to-keep: 10
days-to-keep-artifacts: 5
artifacts-to-keep: 10
What Are YAML Files Used for in VB Studio?
All YAML files must reside in the .ci-build
directory in the root
directory of any hosted Git repository's main
branch. YAML files in
other branches will be ignored. Any text file that has a .yml
file
extension and resides in the main
branch's .ci-build
directory is considered to be a YAML configuration file. Each YAML file can contain
configuration data for exactly one job or one pipeline. You can have YAML files in
multiple Git repositories, or use a separate Git repository to host all your YAML
configuration files. You cannot, however, use an external Git repository to host YAML
files. Because these configuration files are stored using Git, you can track changes
made to the job or pipeline configuration and, if a job or pipeline is deleted, you can
use the configuration file to recreate it.
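For example, a project might keep all of its build configuration in a single repository, with a layout like this in the main branch (the repository and file names are hypothetical):
.ci-build/
  build-job.yml       # configuration for one job
  deploy-job.yml      # configuration for another job
  main-pipeline.yml   # configuration for one pipeline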
The build system constantly monitors the project's Git repositories. When it detects
an update to a file with the .yml
extension in the
.ci-build
directory of a Git repository's main
branch, it scans the file to determine if it is a job or a pipeline, and creates or
updates the corresponding job or pipeline. First, it verifies whether the job or the
pipeline of the same name (as in the configuration file) exists on the Builds page. If the job or the pipeline exists, it's updated. If the name of the job or
pipeline has changed in the configuration file, it's renamed. If the job or the pipeline
doesn't exist, it's created.
Note:
Jobs and pipelines created with YAML can't be edited on the Builds page; they must be edited using YAML. Similarly, jobs and pipelines created on the Builds page can't be edited using YAML.
YAML stores data as key-value pairs in the field: value format. A hyphen (-) before a field identifies it as an array or a list. It must be indented to the same level as the parent field. To indent, always use spaces, not tabs. Make sure that the number of indent spaces before a field name matches the number of indent spaces in the template. YAML is sensitive to the number of spaces used to indent fields. Also, the field names in a YAML file are similar to the field names in the job configuration user interface:
name: MyFirstYAMLJob
vm-template: Basic Build Executor Template
git:
  - url: "https://mydevcsinstance-mydomain/.../scm/employee.git"
steps:
  - shell:
      script: "echo Build Number: $BUILD_NUMBER"
  - maven:
      goals: clean install
      pom-file: "employees-app/pom.xml"
If you're editing a YAML file on your computer, always use a text editor with the UTF-8 encoding. Don't use a word processor.
Here are some additional points to consider about YAML files before you begin creating or editing them:
- The name field in the configuration file defines the job's or pipeline's name. If no name is specified, the build system creates a job or a pipeline with the name <repo-name>_<name>, where repo-name is the name of the Git repository where the YAML file is hosted and <name>.yml is the name of the YAML file. For example, if the YAML file's name is MyYAMLJob and it's hosted in the YAMLJobs Git repository, then the job's or pipeline's name would be YAMLJobs_MyYAMLJob.
  If you add the name field later, the job or pipeline will be renamed. Its access URL will also change.
- Each job's configuration must define the vm-template field.
- When you define a string value, you can use quotes, if necessary. If any string values contain special characters, always enclose the values with quotes.
  Here are some examples of special characters: *, :, {, }, [, ], ,, &, #, ?, |, -, <, >, =, !, %, @, `.
  You can use single quotes (' ') or double quotes (" "). To include a single quote in a single-quoted string, escape it by prefixing it with another single quote. For example, to set Don's job in the name field, use name: 'Don''s job' in your YAML file. To use a double quote in a double-quoted string, escape it with a backslash (\) character. For example, to set My "final" job in the name field, use name: "My \"final\" job" in your YAML file. There's no need to escape backslashes in a single-quoted string.
- Named Password/Private Key parameters must be specified in the format #{PSSWD_Docker}, surrounded by quotes, as shown in the following example:
  params:
    - string:
        name: myUserName
        value: "don.developer"
        description: My Username
  steps:
    - docker-login:
        username: $myUserName
        password: "#{PSSWD_Docker}"
  Password/Private Key parameters are specified using the format $myPassword, as shown in the following example:
  params:
    - string:
        name: myUserName
        value: "don.developer"
        description: My Username
    - password:
        name: myPwd
        password: "#{PSSWD_Docker}"
        description: Defining the build password
  steps:
    - docker-login:
        username: $myUserName
        password: $myPwd
- If you specify a field name but don't specify a value, YAML assumes the value to be null. This can cause errors. If you don't need to define a value for a field, remove the field name.
  For example, if you don't want to define Maven goals and want to use the default clean install, remove the goals field. The following YAML code can cause an error because goals isn't defined:
  steps:
    - shell:
        script: "echo Build Number: $BUILD_NUMBER"
    - maven:
        goals:
        pom-file: "employees-app/pom.xml"
- You don't need to define every one of the job's fields in the YAML file. Just define
the ones you want to configure or change from the default values, and make sure that
you're adding the parent field(s) when you define a child field:
  steps:
    - maven:
        pom-file: "employees-app/pom.xml"
- To run a build of the job automatically when its Git repository is updated, use the auto field or set build-on-commit to true.
  For the current Git repository, using auto is equivalent to setting build-on-commit to true. So, don't use auto and build-on-commit: true together.
  Here's an example that uses auto:
  name: MyFirstYAMLJob
  vm-template: Basic Build Executor Template
  auto:
    branch: patchset_1
  If you use auto, don't specify the Git repository URL. The job automatically tracks the Git repository where the YAML file is committed.
  Here's an example that uses build-on-commit:
  name: MyFirstYAMLJob
  vm-template: Basic Build Executor Template
  git:
    - url: "https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_902/scm/employee.git"
      branch: patchset_1
      build-on-commit: true
  A commit pushed to the patchset_1 branch triggers a build of the MyFirstYAMLJob job.
- To add comments in the configuration file, precede the comment with the pound sign (#):
  steps:
    # Shell script
    - shell:
        script: "echo Build Number: $BUILD_NUMBER"
- On the Builds page, to configure an existing job or a pipeline, click its Configure button or icon. If the job or the pipeline was created in YAML, VB Studio opens the YAML file in the code editor on the Git page so you can view or edit the configuration.
- The branch value depends on the default branch of the repository that is specified in the YAML. If the head of the Git repository is main, then main is the default; if the head is master, then master is the default. The default behavior has always been based on the head of the Git repository, but until this release the head was always master.
REST API for Accessing YAML Files
You can use an API testing tool, such as Postman, or curl commands to run REST API methods. To run curl commands, either download curl to your computer or use the Git CLI.
To create the REST API URL, you need your VB Studio user name and password, the base URL of your instance, the unique organization ID, and the project ID, which you can get from any of the project's Git repository URLs.
In a Git repository URL, the project's ID is located before
/scm/<repo-name>.git
. For example, if
https://alex.admin%40example.com@mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_123/scm/NodeJSDocker.git
is the Git repository's URL in a project, the project's unique ID will be
mydevcsinstance-mydomain_my-project_123.
How Do I Validate a Job or Pipeline Configuration?
To validate a job (or pipeline) configuration, use this URL with the syntax shown, passing in the local (on your computer) YAML file as a parameter:
https://<base-url>/<identity-domain>/rest/<identity-domain>_<unique-projectID>/cibuild/v1/yaml/validate
Here's an example with a curl command that validates a job configuration on a Windows computer:
curl -X POST -H "Content-Type: text/plain" --data-binary
@d:/myApps/myPHPapp/.ci-build/my_yaml_job.yml -u
alex.admin@example.com:My123Password
https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/myorg/rest/myorg_my-project_1234/cibuild/v1/yaml/validate
Here's an example with a curl command that validates a pipeline configuration on a Windows computer:
curl -X POST -H "Content-Type: text/plain" --data-binary
@d:/myApps/myPHPapp/.ci-build/my_yaml_pipeline.yml -u
alex.admin@example.com:My123Password
https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/myorg/rest/myorg_my-project_1234/cibuild/v1/yaml/validate
Create a Job or a Pipeline Without Committing the YAML File
You can create a job or pipeline without first committing its YAML file to your project's Git repository. To do so, use a URL with this syntax, passing in a local (on your computer) YAML file as a parameter:
https://<base-url>/<identity-domain>/rest/<identity-domain>_<unique-projectID>/cibuild/v1/yaml/import
VB Studio will read the YAML job (or pipeline) configuration and, if no errors are detected, create a new job (or pipeline). The job (or pipeline) must be explicitly named in the YAML configuration. After the job (or pipeline) has been created, you can edit its configuration on the Builds page. If errors are detected, the job (or pipeline) will not be created and the Recent Activities feed will display any error messages.
Here's an example that shows how to use a curl command with a YAML file on a Windows computer to create a job:
curl -X POST -H "Content-Type: text/plain" --data-binary
@d:/myApps/myPHPapp/my_PHP_yaml_job.yml -u
alex.admin@example.com
https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/myorg/rest/myorg_my-project_1234/cibuild/v1/yaml/import
You'll be prompted for the password:
Enter host password for user 'alex.admin':
How Do I Use YAML to Create or Configure a Job?
You can use YAML for creating a new job or configuring an existing one:
If you create the YAML file this way, you won't be able to validate it without committing it first. Commit the file and check the Recent Activities Feed on the Project Home page for any errors.
What Is the Format for a YAML Job Configuration?
In a YAML job configuration, any field with a value of ""
accepts a
string value that is empty by default. ""
is not a valid value for some
fields, such as name
, vm-template
, and
url
. If you want a field to use its default value, remove the field
from the YAML file.
When you configure a job, fields such as name
, description
, vm-template
, and auto
must precede groups like git
, params
, and steps
.
Here's a job's YAML configuration format with the default values:
job:
name: ""
description: ""
vm-template: "" # required
auto: false # deprecated - true implies branch: master; otherwise, set branch explicitly
auto:
branch: mybranch # deprecated
# See Auto specification section below
auto: mybranch # automatically build a single branch on commit
auto: "*" # automatically build any branch on commit
auto:
include: # array of branches or branch patterns to include, for example
- "*" # automatically build any branch on commit
except: # array of exceptions (optional)
- "" # except these branches
auto:
exclude: # array of branches or branch patterns to exclude
- "" # default exclude nothing (include everything)
except: # array of exceptions (optional)
- "" # but including these branches
from-job: "" # create job as copy of another job; ignored after creation
for-merge-request: false
allow-concurrent: false # if true, concurrent builds will be allowed if necessary
disabled: false # if true, job will not build
#
# disabled=true/false can be specified for every item in the job below
# e.g., for git, param, steps, etc. items
# for brevity, not shown below in every item
git:
- url: "" # required
branch: "master" # branch: * is treated specially; see the Auto build section above
repo-name: "origin"
local-git-dir: ""
refspec: ""
included-regions: "" # deprecated - see trigger-when, file-pattern, and exceptions
excluded-regions: "" # deprecated - see trigger-when, file-pattern, and exceptions
trigger-when: INCLUDE # one of INCLUDE or EXCLUDE
file-pattern: "" # default is "**/*" for INCLUDE or "" for EXCLUDE
exceptions: "" # exceptions to INCLUDE or EXCLUDE file-pattern
excluded-users: ""
merge-branch: ""
config-user-name: ""
config-user-email: ""
merge-from-repo: false
merge-repo-url: ""
checkout-revision: ""
prune-remote-branches: false
skip-internal-tag: true
clean-after-checkout: false
update-submodules: false
use-commit-author: false
wipeout-workspace: false
build-on-commit: false
# When build-on-commit: true, the "auto" branch can be specified as follows:
include: # A list of branches to include
- "*" # Branch name, wildcard like "*" or regular expressions like /.*/ are allowed
except: # Except do not include the branches in this list
- "/^patchset_/" # Branch name, example regular expression shown
# Or
exclude: # A list of branches to exclude (all branches not excluded are included)
- "/^patchset_/" # Branch name, example regular expression shown
except: # Except do not exclude the branches in this list
- patchset_21_07_0 # Branch name, example literal branch name shown
params:
# boolean, choice, and string parameters can be specified as string values of the form - NAME=VALUE
# the VALUE of a boolean parameter must be true or false, e.g., - BUILD_ALL=true
# the VALUE of a choice parameter is a comma-separated list, e.g., - PRIORITY=NORMAL,HIGH,LOW
# the VALUE of a string parameter is anything else, e.g., - URL=https://github.com
# Alternatively, parameters can be specified as objects:
- boolean:
name: "" # required
value: true # required
description: ""
- choice:
name: "" # required
description: ""
choices: [] # array of string value choices; at least one required
- merge-request:
params:
- GIT_REPO_BRANCH="" # required
- GIT_REPO_URL="" # required
- MERGE_REQ_ID=""
- password:
name: "" # required
# one of password or private-key is required
# recommended to use named password/private key reference like "#{NAME}"
password: "" # required, or
private-key: "" # required
description: ""
- string:
name: "" # required
value: "" # required
description: ""
before:
- add-param: # Add a parameter after git before rest of build
parameter-name: "" # required - name of added parameter
file-path: "" # required - file that contains value of parameter
sensitive: false # true if sensitive, e.g., password or private key
- add-params: # Add one or more parameters as above (cannot be used to add password parameters)
file-path: "" # required - file that contains one or more lines of the format
# NAME=value
- copy-artifacts:
from-job: ""
build-number: 1 # requires which-build: SPECIFIC_BUILD
artifacts-to-copy: ""
target-dir: ""
which-build: "LAST_SUCCESSFUL" # other choices: LAST_KEEP_FOR_EVER, UPSTREAM_BUILD, SPECIFIC_BUILD, PERMALINK, PARAMETER
last-successful-fallback: false
permalink: "LAST_SUCCESSFUL" # other choices: LAST, LAST_SUCCESSFUL, LAST_FAILED, LAST_UNSTABLE, LAST_UNSUCCESSFUL
# other choices require which-build: PERMALINK
param-name: "BUILD_SELECTOR" # requires which-build: PARAMETER
flatten-dirs: false
optional: false
- npm-registry-setup:
use-current-project-registry: true # true to use current project's Built-in NPM registry
# otherwise, specify one of registry-url or connection
connection: "" # required if use-current-project-registry is false and registry-url is empty
username: "" # required if registry at registry-url requires authentication
password: "" # required if username is specified
registry-url: "" # required if use-current-project-registry is false and connection is empty
custom-npmrc: "" # optional path to a custom .npmrc from the workspace
- oracle-maven:
connection: "" # required if otn-login or otn-password is empty
otn-login: "" # required if connection is empty
otn-password: "" # required if connection is empty
server-id: ""
settings-xml: ""
- security-check:
perform-analysis: false # true to turn on security dependency analyzer of maven builds
create-issues: false # true to create issue for every affected pom file
fail-build: false # true to fail build if vulnerabilities detected
severity: "low" # low (CVSS >= 0.0), medium (CVSS >= 4.0), high (CVSS >= 7.0)
confidence: "low" # low, medium, high, highest
product: "" # required if create-issues true; "1" for Default
component: "" # required if create-issues true; "1" for Default
- ssh:
config:
private-key: "" # optional if ssh-tunnel: password specified
public-key: ""
passphrase: ""
server-public-key: "" # leave empty to skip host verification
setup-ssh: true # true to set up files in ~/.ssh for command-line tools
ssh-tunnel: false
username: "" # required if ssh-tunnel true
password: "" # optional if ssh-tunnel true and private-key specified
local-port: 0 # required if ssh-tunnel true
remote-host-name: "localhost" # optional if ssh-tunnel true
remote-port: 0 # required if ssh-tunnel true
ssh-host-name: "" # required if ssh-tunnel true (name or IP)
- sonarqube-setup:
sonar-server: "" # required Server Name as configured in Builds admin
- xvfb:
display-number: "0"
screen-offset: "0"
screen-dimensions: "1024x768x24"
timeout-in-seconds: 0
more-options: "-nolisten inet6 +extension RANDR -fp /usr/share/X11/fonts/misc"
log-output: true
shutdown-xvfb-after: true
steps:
- abort-customization-set:
customization-set-id: "" # required
environment-name: "" # required
service-name: "" # required
username: # required
password: # required
- ant:
build-file: ""
targets: ""
properties: ""
java-options: ""
- application-ext-packaging:
build-artifact: "extension.vx" # optional, defaults to 'extension.vx'
version: ""
- application-ext-delete:
v2: # true for V2 app extensions; false for V1 (default: false)
app-id: # required for V1 app extensions; unused for V2
extension-id: # required
extension-version: # required
environment-name: # required
service-name: # required
username: # required
password: # required
- apply-customization-set:
customization-set-id: "" # required
environment-name: "" # required
service-name: "" # required
username: # required
password: # required
- bmccli:
private-key: ""
user-ocid: "" # required
fingerprint: "" # required
tenancy: "" # required
region: "us-phoenix-1" # current valid regions are: us-phoenix-1, us-ashburn-1, eu-frankfurt-1, uk-london-1
# more may be added - check OCI configuration
- docker-certificate:
registry-host: "" # required
certificate: "" # required
- docker-build: # docker commands require vm-template with software bundle 'Docker'
source: "DOCKERFILE" # other choices: DOCKERTEXT, URL
path: "" # docker file directory in workspace
docker-file: "" # Name of docker file; if empty use Dockerfile
options: ""
image:
registry-host: ""
registry-id: ""
image-name: "" # required
version-tag: ""
docker-text: "" # required if source: DOCKERTEXT otherwise not allowed
context-root-url: "" # required if source: URL otherwise not allowed
- docker-image:
options: ""
image:
registry-host: ""
registry-id: ""
image-name: ""
version-tag: ""
- docker-load:
input-file: "" # required
- docker-login:
registry-host: ""
username: "" # required
password: "" # required
- docker-pull:
options: ""
image:
registry-host: "" # required
registry-id: ""
image-name: "" # required
version-tag: ""
- docker-push:
options: ""
image:
registry-host: "" # required
registry-id: ""
image-name: "" # required
version-tag: ""
- docker-rmi:
remove: "NEW" # other options: ONE, ALL
options: ""
image: # only if remove: ONE
registry-host: "" # required
registry-id: ""
image-name: "" # required
version-tag: ""
- docker-save:
output-file: # required
image:
registry-host: "" # if omitted Docker Hub is assumed
registry-id: ""
image-name: "" # required
version-tag: ""
- docker-tag:
source-image:
registry-host: "" # required
registry-id: ""
image-name: "" # required
version-tag: ""
target-image:
registry-host: "" # required
registry-id: ""
image-name: "" # required
version-tag: ""
- docker-version:
options: ""
- export-customization-set:
sandbox-name: "" # required
description: ""
id-parameter-name: "CUSTOMIZATION_SET_ID" # optional, defaults to 'CUSTOMIZATION_SET_ID'
include-all-modules: false
optional-modules: "" # Comma-separated list of (zero or more) Optional Module names or codes, e.g., "CRM,BI"
move-all-changes: false
skip-target-check: false
environment-name: "" # required
service-name: "" # required
username: # required
password: # required
- fn-build:
build-args: ""
work-dir: ""
use-docker-cache: true
verbose-output: false
registry-host: ""
username: ""
- fn-bump:
work-dir: ""
bump: "--patch" # other choices: "--major", "--minor"
- fn-deploy:
deploy-to-app: "" # required
build-args: ""
work-dir: ""
deploy-all: false
verbose-output: false
use-docker-cache: true
no-version-bump: true
do-not-push: true
registry-host: ""
username: ""
api-url: "" # required
- fn-oci:
compartment-id: "" # required
provider: ""
# Note: the passphrase field is no longer required nor allowed
- fn-push:
work-dir: ""
verbose: false
registry-host: ""
username: ""
- fn-version: {}
- gradle:
use-wrapper: false
wrapper-gradle-version: "" # ignored unless use-wrapper: true
make-executable: false # ignored unless use-wrapper: true, then default true
# must set make-executable: true if wrapper doesn't already exist
# corresponds to Create 'gradlew' wrapper
from-root-build-script-dir: false # ignored unless use-wrapper: true
root-build-script: "" # ignored unless from-root-build-script-dir: true; script directory
tasks: "clean build"
build-file: "build.gradle"
switches: ""
use-workspace-as-home: false
description: ""
use-sonar: false # if true sonarqube-setup must be configured
- import-customization-set:
customization-set-id: "" # required
ignore-unpublished-sandboxes: false
environment-name: "" # required
service-name: "" # required
username: # required
password: # required
- maven:
goals: "clean install"
pom-file: "pom.xml"
private-repo: false
private-temp-dir: false
offline: false
show-errors: false
recursive: true
profiles: ""
properties: ""
verbosity: NORMAL # other choices: DEBUG, QUIET
checksum: NORMAL # other choices: STRICT, LAX
snapshot: NORMAL # other choices: FORCE, SUPPRESS
projects: ""
resume-from: ""
fail-mode: NORMAL # other choices: AT_END, FAST, NEVER
make-mode: NONE # other choices: DEPENDENCIES, DEPENDENTS, BOTH
threading: ""
jvm-options: ""
use-sonar: false # if true, sonarqube-setup must be configured
- nodejs:
source: SCRIPT # other choice: FILE
file: "" # only if source: FILE
script: "" # only if source: SCRIPT
- oic-delete-integration:
environment-name: "" # required, identifies the environment containing the OIC instance
service-name: "" # required, the OIC instance for the operation
username: "" # required
password: "" # required
identifier: "" # required, the uppercase integration identifier
version: "" # required, the integration version
- oic-delete-package:
environment-name: "" # required, identifies the environment containing the OIC instance
service-name: "" # required, the OIC instance for the operation
username: "" # required
password: "" # required
package-name: "" # required, the name of the package to delete
deactivate-integrations: false # if true, automatically deactivate integrations before deleting package
- oic-export-integration:
environment-name: "" # required, identifies the environment containing the OIC instance
service-name: "" # required, the OIC instance for the operation
username: "" # required
password: "" # required
identifier: "" # required, the uppercase integration identifier
version: "" # required, the integration version
include-recording-flag: false
- oic-export-package:
environment-name: "" # required, identifies the environment containing the OIC instance
service-name: "" # required, the OIC instance for the operation
username: "" # required
password: "" # required
package-name: "" # required, the name of the package to export
include-recording-flag: false
- oic-import-integration:
environment-name: "" # required, identifies the environment containing the OIC instance
service-name: "" # required, the OIC instance for the operation
username: "" # required
password: "" # required
integration-archive: "" # required, the filename of the integration archive file (<IDENTIFIER>_<VERSION>.iar)
include-recording-flag: false
activate: false # see https://docs.oracle.com/en/cloud/paas/integration-cloud/integrations-user/activate-integration.html
oracle-recommends-flag: true
record-enabled-flag: false
tracing-enabled-flag: false
payload-tracing-enabled-flag: false
- oic-import-package:
environment-name: "" # required, identifies the environment containing the OIC instance
service-name: "" # required, the OIC instance for the operation
username: "" # required
password: "" # required
package-archive: "" # required, the filename of the package archive file (<packagename>.par)
include-recording-flag: false
- oracle-deployment: # currently Visual Applications, Application Extensions, and JCS using REST are supported
environment-name: "" # required, scopes the service-name
service-name: "" # required, the service instance type determines the deployment type
username: "" # required if Visual Application or Application Extension deployment
# required if JCS deployment, and then it is the weblogic username
password: "" # required if Visual Application or Application Extension deployment
# required if JCS, and then it is the weblogic user's password
application-version: "" # optional if Visual Application (defaults from visual-application.json), else n/a
application-profile: "" # optional if Visual Application, else n/a
include-application-version-in-url: true # required if Visual Application, other choice: false
data-management: "KEEP_EXISTING_ENVIRONMENT_DATA" # required if Visual Application, other choice: "USE_CLEAN_DATABASE"
sources: "" # optional if Visual Application (defaults to build/sources.zip), else unused
build-artifact: "" # optional if Visual Application (defaults to build/built-assets.zip), else unused
# required if Application Extension
# required if JCS
application-name: "" # required if JCS, else n/a
weblogic-version: "" # required if JCS (one of 12.2.x or 12.1.x)
https-port: "7002" # required if JCS
admin-port: "9001" # required if JCS
protocol: "REST" # required if JCS (one of REST, REST1221, SSH)
targets: "" # required if JCS, one or more names of target service or cluster, comma-separated
- psmcli:
username: "" # required
password: "" # required
identity-domain: "" # required
region: US # other choice: EMEA
output-format: JSON # other choice: HTML
- restore-customization-set:
customization-set-id: "" # required
environment-name: "" # required
service-name: "" # required
username: # required
password: # required
- shell:
script: ""
xtrace: true
verbose: false # both verbose and xtrace cannot be true
use-sonar: false # if true sonarqube-setup must be configured
- sqlcl:
username: ""
password: ""
credentials-file: ""
connect-string: ""
source: SQLFILE # other choice: SQLTEXT
sql-file: "" # only if source: SQLFILE
sql-text: "" # only if source: SQLTEXT
role: DEFAULT # other choices: SYSDBA, SYSBACKUP, SYSDG, SYSKM, SYSASM
restriction-level: DEFAULT # other choices: LEVEL_1, LEVEL_2, LEVEL_3, LEVEL_4
- vbappops-export-data:
environment-name: # required
service-instance: # required
vb-project-id: # required
vb-project-version: # required
username: # required
password: # required
app-data-file: # required
- vbappops-import-data:
environment-name: # required
service-instance: # required
vb-project-id: # required
vb-project-version: # required
username: # required
password: # required
app-data-file: # required
- vbappops-lock-app:
environment-name: # required
service-instance: # required
vb-project-id: # required
vb-project-version: # required
username: # required
password: # required
- vbappops-unlock-app:
environment-name: # required
service-instance: # required
vb-project-id: # required
vb-project-version: # required
username: # required
password: # required
- vbappops-undeploy-app:
environment-name: # required
service-instance: # required
vb-project-id: # required
vb-project-version: # required
username: # required
password: # required
- vbappops-rollback-app:
environment-name: # required
service-instance: # required
vb-project-id: # required
vb-project-version: # required
username: # required
password: # required
- visual-app-packaging:
sources: "build/sources.zip" # optional, defaults to 'build/sources.zip'
build-artifact: "build/built-assets.zip" # optional, defaults to 'build/built-assets.zip'
optimize: true # boolean
after:
- artifacts:
include: "" # required
exclude: ""
maven-artifacts: false
include-pom: false # ignored unless maven-artifacts: true
- export-param: # export a parameter to downstream jobs in pipeline
parameter-name: "" # required - name of added parameter
file-path: "" # required - file that contains value of parameter
sensitive: false # true if sensitive, e.g., password or private key
- export-params: # export one or more parameters to downstream jobs in pipeline
file-path: "" # required - file that contains one or more lines of the format
# NAME=value
- git-push:
push-on-success: false
merge-results: false
tag-to-push: ""
create-new-tag: false
tag-remote-name: "origin"
branch-to-push: ""
branch-remote-name: "origin"
local-git-dir: ""
- javadoc:
javadoc-dir: "target/site/apidocs"
retain-for-each-build: false
- junit:
include-junit-xml: "**/surefire-reports/*.xml"
exclude-junit-xml: ""
keep-long-stdio: false
organize-by-parent: false
fail-build-on-test-fail: false
archive-media: true
- sonarqube: # sonarqube-setup must be configured
replace-build-status: true # Apply SonarQube quality gate status as build status
archive-analysis-files: false
settings:
- abort-after:
hours: 0
minutes: 0
fail-build: false
- build-retry:
build-retry-count: 5
git-retry-count: 5
- discard-old:
days-to-keep-build: 0
builds-to-keep: 100
days-to-keep-artifacts: 0
artifacts-to-keep: 20
- git-poll:
cron-pattern: "0/30 * * * * #Every 30 minutes"
- log-size:
max: 50 # megabytes
- logger-timestamp:
timestamp: true
- periodic-build:
cron-pattern: "0/30 * * * * #Every 30 minutes"
- quiet-period:
seconds: 0
- versions:
version-map:
Java: "8" # For templates the options (with defaults wrapped in '*' chars) are
# Java: 7, *8*, 11, 15, 8 (GraalVM)
# For the Built-in (Free) build executors, the options are
# Java: 7, *8*, 11, or 13
# nodejs: 0.12, 8, or *10*
# python3: 3.5, or *3.6*
# soa: 12.1.3, or *12.2.1.1*
YAML Job Configuration Examples
Here are several examples of YAML job configurations:
Job Configuration | YAML Code |
---|---|
This configuration creates a job that runs Maven goals then archives the artifacts:
|
job:
  name: MyFirstYAMLJob
  vm-template: Basic Build Executor Template
  git:
    - url: "https://mydevcsinstance-mydomain/.../scm/employee.git"
  steps:
    - maven:
        goals: clean install
        pom-file: "employees-app/pom.xml"
  after:
    - artifacts:
        include: "employees-app/target/*"
|
This configuration creates a job to run Docker steps that log in, build, and push an image to the OCI Registry:
|
job:
  name: MyDockerJob
  description: Job to build and push a Node.js image to OCI Registry
  vm-template: Docker and Node.js Template
  git:
    - url: "https://mydevcsinstance-mydomain/.../scm/NodeJSMicroDocker.git"
  steps:
    - docker-login:
        registry-host: "https://iad.ocir.io"
        username: "myoci/ociuser"
        password: My123Password
    - docker-build:
        source: "DOCKERFILE"
        options: "--build-arg https_proxy=https://my-proxy-server:80"
        image:
          image-name: "myoci/ociuser/mynodejsimage"
          version-tag: "1.8"
          registry-host: "https://iad.ocir.io"
        path: "mydockerbuild/"
    - docker-push:
        image:
          registry-host: "https://iad.ocir.io"
          image-name: "myoci/ociuser/mynodejsimage"
          version-tag: "1.8"
    - docker-image:
        options: "--all"
|
This configuration creates a job that uses SQLcl to run SQL commands and a script:
|
job:
  name: RunSQLJob
  vm-template: Basic Build Executor Template
  steps:
    - sqlcl:
        username: dbuser
        password: My123Password
        connect-string: "myserver.oracle.com:1521:db1234"
        sql-text: "CD /home\nselect * from Emp"
        source: "SQLTEXT"
    - sqlcl:
        username: dbuser
        password: My123Password
        connect-string: "myserver.oracle.com:1521:db1234"
        sql-file: "sqlcl/simpleselect.sql"
        source: "SQLFILE"
|
This configuration creates a job that runs Maven goals and archives the artifacts:
|
job:
  name: MyADFApp
  vm-template: JDev and ADF Build Executor Template
  auto:
    branch: "patchset_1"
  git:
    - url: "https://mydevcsinstance-mydomain/.../scm/ADFApp.git"
      branch: patchset_1
      build-on-commit: true
      included-regions: "myapp/src/main/web/.*\\.java"
      excluded-regions: "myapp/src/main/web/.*\\.gif"
      clean-after-checkout: true
  before:
    - copy-artifacts:
        from-job: ADFDependecies
        artifacts-to-copy: adf-dependencies.war
    - oracle-maven:
        otn-login: "alex.admin@example.com"
        otn-password: My123Password
  steps:
    - maven:
        goals: clean install package
        pom-file: "WorkBetterFaces/pom.xml"
  after:
    - artifacts:
        include: "WorkBetterFaces/target/*.ear"
  settings:
    general:
      - discard-old:
          days-to-keep-build: 50
          builds-to-keep: 10
    software:
      - versions:
          version-map:
            Java: 7
    triggers:
      - git-poll:
          cron-pattern: "0/30 5 * 2 *"
    advanced:
      - abort-after:
          hours: 1
      - build-retry:
          build-retry-count: 5
          git-retry-count: 10
|
How Do I Use YAML to Create or Configure a Pipeline?
You can use YAML for creating a new pipeline or configuring an existing one:
What Is the Format for a YAML Pipeline Configuration?
Here's a pipeline's configuration with the default values in YAML format:
pipeline:
name: "" # pipeline name - if omitted, name is constructed from repository and file name
description: "" # pipeline description
auto-start: true # automatically start pipeline if any job in pipeline is run
# if false, pipeline will start only if manually started
# or a trigger action item is activated
auto-start: # implied true
triggers-only: false # if true, autostart only for jobs that have no preceding jobs
allow-external-builds: true # jobs in pipeline can run independently while pipeline is running
periodic-trigger: "" # cron pattern with 5 elements (minute, hour, day, month, year)
# the pipeline is started (beginning with the Start item) periodically
triggers: # define trigger action items of periodic, poll, or commit types
# there may be one or more of each type
- periodic: # define trigger action item of periodic type; build pipeline every so often
name: "" # required, trigger name - must be unique trigger name; may not be "Start"
cron-pattern: "" # required, cron pattern specifying Minute Hour Day Month Year, e.g., "? 0 * * *"
- poll: # define trigger action item of poll type; poll repository every so often
name: "" # required, trigger name - must be unique trigger name; may not be "Start"
cron-pattern: "" # required, cron pattern as above
url: "" # required, git repository URL
branch: "" # required, git repository branch - trigger activated if changes detected in branch
exclude-users: "" # user identifier of committer to ignore
# if more than one user, use multi-line text, one user per line
trigger-when: INCLUDE # activate only if change to files in file-pattern; alternative EXCLUDE
# if EXCLUDE, activate only for change to files not in file-pattern
file-pattern: "" # file(s) to INCLUDE/EXCLUDE; may be ant or wildcard-style file/folder pattern
# if more than one pattern, use multi-line text, one pattern per line
# for example...
exceptions: "" # exceptions to file-pattern above
# if more than one exception file pattern, use multi-line text, one pattern per line
#
# Example of multi-line pattern - note that these lines can't have comments, as they would be part of text
file-pattern: |
README*
*.sql
- commit: # define trigger action item of commit type; automatically run pipeline on commit
name: "" # required, trigger name - must be unique trigger name; may not be "Start"
url: "" # required, git repository URL for local project repository
branch: "" # git repository branch name
# required: must specify branch or include/exclude/except branch patterns
include: "" # branch patterns to include; branch name, wildcard or regex
# if more than one pattern, use multi-line text, one pattern per line
exclude: "" # branch patterns to ignore; specify either include or exclude, not both
# if more than one pattern, use multi-line text, one pattern per line
except: "" # branch pattern exceptions to include or exclude above
# if more than one pattern, use multi-line text, one pattern per line
exclude-users: "" # user identifier of committer to ignore
# if more than one user, use multi-line text, one user per line
trigger-when: INCLUDE # activate only if change to files in file-pattern; alternative EXCLUDE
# if EXCLUDE, activate only for change to files not in file-pattern
file-pattern: "" # file(s) to INCLUDE/EXCLUDE; may be ant or wildcard-style file/folder pattern
# if more than one pattern, use multi-line text, one pattern per line
exceptions: "" # exceptions to file-pattern above
# if more than one exception file pattern, use multi-line text, one pattern per line
start: # required begins an array of job names, or parallel, sequential, or on groups
- JobName # this job runs first, and so on (start is a sequential group)
# Groups:
- parallel: # items in group run in parallel
- sequential: # items in group run sequentially
- on succeed,fail,test-fail: # items in group run sequentially if preceding job result matches condition
# can specify one or more of conditions:
# succeed (success), fail (failure), or test-fail (post-fail)
# Examples:
- parallel: # jobs A, B and C run in parallel, job D runs after they all finish
- A
- B
- C
- D
- on succeed: # if job D succeeds, E builds, otherwise F builds
- E
- on fail, test-fail:
- F
#
start: # Jobs that trigger pipelines can be specified.
- trigger: # A trigger section appears before the job(s) it triggers
- JobA # trigger is a "parallel" section - JobA and JobB are independent
- JobB
- JobName # This job runs first. It can be started when the pipeline is run, or if
# either of the trigger jobs JobA or JobB is built successfully.
# Triggers assume that auto-start is true.
start:
- A
- parallel:
- B
- sequential: # A trigger cannot appear in a parallel section
- trigger: # The jobs triggered are the next in sequence
- Trigger1
- C
- trigger: # But can appear anywhere in a sequential section
- Trigger2
- D
- trigger: # A trigger at the end of the start section is an independent graph
- sequential: # not connected to anything that precedes it.
- X
- Y
- Z
#
# on sections can "join" - like an if/then/else followed by something else
start:
- A
- on fail:
- F # If A fails, build F
- on test-fail:
- T # If the tests for A fail, build T
- on succeed:
- <continue> # If A succeeds, fall through to whatever follows the on conditions for A
- B # B is built if A, F, or T succeed
#
# A job run in parallel (or conditionally) can end the chain
start:
- A
- parallel: # Run B, C, and D in parallel
- B
- C
- end:
- D # There is no arrow from D
- E # E is run if B and C succeed
#
start:
- A
- on fail:
- end:
- F # If A fails, build F and end the pipeline
- on test-fail:
- T # If the tests for A fail, build T
- on succeed:
- <continue> # If A succeeds, fall through to whatever follows the on conditions for A
- B # B is built if A or T succeed
# -------------------------------------------------------------------------------
# Not all pipelines you can draw can be represented in hierarchical form as above.
# To allow a YAML definition of any pipeline graph, you can use a graph notation
# similar to the digraph representation supported by Dot/GraphViz.
# For example, the pipeline with triggers above can be written as a graph.
graph: # (Both graph: and start: cannot be used in the same pipeline.)
- JobA -> JobName # There is a link from JobA to JobName
- JobB -> JobName # There is a link from JobB to JobName
- <Start> -> JobName # There is a link from Start to JobName
# The representation <Start> distinguishes the special "Start" node that
# appears in every pipeline from a job named Start.
#
# Conditional links can be represented using the ? and a list of one or more conditions.
# For example, the partial pipeline above beginning with 'parallel' can be represented as:
graph:
- <Start> -> A
- <Start> -> B
- <Start> -> C
- A -> D
- B -> D
- C -> D
- D -> E ? succeed # If D succeeds, E is built
- D -> F ? fail, test-fail # If D fails or tests fail, F is built
# "succeed" is the default when no ? is specified.
#
# Any combination of succeed (success), fail (failure), or test-fail (post-fail)
# can be written in a comma-separated list after the question mark.
#
# Not every graph that can be specified in this way is a valid pipeline.
# For example, graphs with cycles are not allowed.
#
# "Joins" like the A, B, C converging on D above only work (D gets built)
# if all of A, B, and C succeed. If, for example, B fails, D will not be built.
# However, joins on nodes that are directly downstream from [Start] are a
# special case. If any job triggers these nodes, they will be run.
# This special case allows the triggers: section to work as expected.
# (This is not new behavior in 22.01.0.)
graph:
- <Start> -> A
- <Start> -> B
- A -> C # Links from A to C and B to C are to the same node (and job) C
- B -> C
# on the other hand...
graph:
- <Start> -> A
- <Start> -> B
- A -> C # Links from A to C and B to C$2 are to the same job C
- B -> C$2 # but to different nodes
# In other words, the job C appears in two different places in the pipeline
YAML Pipeline Configuration Examples
Here are some examples of different YAML pipeline configurations:
YAML Definition | Pipeline Configuration |
---|---|
pipeline:
  name: My Pipeline
  description: YAML pipeline configuration
  auto-start: true
  allow-external-builds: true
  start:
    - Job 1
    - Job 2
    - Job 3
|
Job 2 runs after Job 1 completes successfully. Then Job 3 runs after Job 2 completes successfully. This pipeline starts if any job in the pipeline is run. Jobs in the pipeline can be run independently while the pipeline is running. |
pipeline:
  name: My Pipeline
  auto-start: true
  start:
    - Job 1
    - parallel:
        - Job 2
        - Job 3
        - Job 4
    - Job 5
|
Jobs 2, 3, and 4 run in parallel after Job 1 completes successfully. Job 5 runs after the three parallel jobs complete successfully. The pipeline will start if any job in the pipeline runs. |
pipeline:
  name: My Pipeline
  start:
    - Job 1
    - Job 2
    - parallel:
        - Job 3
        - Job 4
        - sequential:
            - Job 5
            - Job 6
    - Job 7
|
Job 2 runs after Job 1 completes successfully. Jobs 3, 4, and 5 run in parallel after Job 2 completes successfully. Job 6 runs after Job 5 completes successfully. Job 7 runs after Jobs 6, 3, and 4 complete successfully. |
pipeline:
  name: My Pipeline
  start:
    - Job 1
    - on succeed:
        - Job 2
    - on fail:
        - Job 3
|
If Job 1 runs successfully, Job 2 runs. If Job 1 runs and fails, Job 3 runs. |
pipeline:
  start:
    - Job 1
    - on succeed:
        - Job 2
    - on test-fail:
        - Job 3
    - on fail:
        - Job 3
|
If Job 1 runs successfully, Job 2 is run. If Job 1 runs successfully but fails tests or any post-build action, or if Job 1 fails, Job 3 is run. Job 3 won't run if Job 1 completes successfully. |
Set Dependency Conditions in Pipelines Using YAML
When you create a pipeline that includes a dependency between a parent and a child job, by default, the build of the child job will run after the parent job’s build completes successfully. You can configure the dependency to run a build of the child job after the parent job’s build fails too, either by using the pipeline designer or by setting an "on condition" in YAML to configure the result condition.
The pipeline designer supports Successful, Failed, or Test Failed conditions (see Configure the Dependency Condition). YAML supports additional conditions you can use. Here they are, with the build results they are mapped to:
- "succeed" and "success" map to a "SUCCESSFUL" build result
- "fail" and "failure" map to a "FAILED" build result
- "test-fail" and "post-fail" map to a "POSTFAILED" build result
None of these conditions match when a job is aborted, canceled, or restarted, so the pipeline never proceeds beyond that job.
See YAML Pipeline Configuration Examples to learn more about using and setting some of these dependency conditions in YAML. The fourth example shows how to use the "on succeed" and "on fail" settings. The fifth example shows how to use the "on succeed", "on test-fail", and "on fail" settings.
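If you prefer the graph notation described in What Is the Format for a YAML Pipeline Configuration?, the same result conditions can be attached to individual links. Here's a minimal sketch with hypothetical job names:
pipeline:
  name: ConditionalPipeline
  graph:
    - <Start> -> Build
    - Build -> Deploy ? succeed            # run Deploy only if Build succeeds
    - Build -> Notify ? fail, test-fail    # run Notify if Build fails or its tests fail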
You can use the new public API to view the pipeline instance log to see what happened with the builds in the pipeline, after the fact. Use this format to get the log:
GET pipelines/{pipelineName}/instances/{instanceId}/log
Define and Use Triggers in YAML
Triggers are pipeline nodes, called action items, that aren't jobs. A job or multiple jobs can be used as triggers, but there is an overhead cost associated with such use. Instead of using trigger jobs, you can specify a new category of action items in the YAML pipeline configuration to define triggers. Trigger action items can start up on their own and then trigger the rest of the pipeline. This is a YAML-only feature. To understand triggers, it helps to explore action items.
What Are Action Items?
Action items, a category of special-purpose executable pipeline items, are used to automate actions when jobs and tasks are too heavyweight. An action is a short-running activity that's performed locally in the build system. This special-purpose long-lived executable entity appears as a node in a pipeline and can be started by a user action, an automated action, or by entry from an upstream item.
Actions are single-purpose where each action does one thing. When an action is started, it performs some action, and completes with a result condition. An action is configurable but not programmable and should never contain user-written code in any form. An action has a category, like “trigger”, and a sub-category, like “periodic” or “polling”, that defines the item type. Each action has a name that is unique within a pipeline. If the name isn't configured, a default name will be supplied based on the item type, for example, “PERIODIC-1”. The name “Start” is reserved. The name is required and represents a configuration of the action.
What Is a Trigger?
A trigger is an action item that is based on some user or automated event that starts executing a pipeline at a specific point beginning with the nodes directly downstream of the trigger. If a trigger has upstream connections, and is invoked from an upstream connection, the trigger acts as a pass-through. It completes immediately and, if it has any downstream connections, the downstream items are initiated.
There are several subcategories of triggers:
- Periodic – The trigger is started periodically, based on a cron schedule.
- Polling – The trigger is started if SCM polling detects that commits have been pushed to a specified repository and branch since the last poll. Polling is based on a cron schedule. The repository URL and branch name are set as downstream parameter values with user-configurable names.
- Commit – The trigger is started if a commit is pushed to a specified local project repository and a specified set of branches. The repository URL and branch name are set as downstream parameter values with user-configurable names.
- Manual – The trigger is started manually. Start, the default manual trigger, is present in every pipeline. If Start is the only manual trigger, the pipeline starts there and executes the next downstream job(s). If a pipeline includes a manual trigger job, it can be started in the UI and execute its next downstream job, bypassing Start. If the pipeline has multiple trigger jobs, the user needs to choose which of them to initiate the pipeline run with.
Periodic Triggers
Here's an example that shows how to use a periodic trigger:
pipeline:
name: PeriodicPipeline
description: "Trigger defined in periodic, used in start"
auto-start:
triggers-only: true
allow-external-builds: false
triggers:
- periodic:
name: MidnightUTC
cron-pattern: "0 0 * * *"
start:
- trigger:
- <MidnightUTC>
- JobA3
Notice that the periodic trigger is defined with a name and a cron pattern. The reference to the trigger action item is enclosed in angle brackets, <MidnightUTC> in this case, to differentiate it from a job, such as JobA3.
-
In the Pipelines tab on the Builds page, trigger action items are represented as squared off blocks, like the Start item or the <TenMinutes> item. The pipeline in this diagram was started by the action item with the periodic trigger <TenMinutes>. This trigger runs the pipeline every ten minutes.
Description of the illustration pipeline-trigger-start.png
Notice that a solid line goes from it to JobA1 and JobA2, but a dotted line goes from Start through its trigger to its downstream jobs. This is because the pipeline wasn't initiated from Start. The trigger item and the executed jobs are shaded, indicating the pipeline's execution path.
-
You could manually start the pipeline too, as this diagram shows.
Description of the illustration pipeline-manual-start.png
In this case, the execution passes from Start, the default trigger, to JobA1 and then to JobA2. The graphic representation shows the execution path with solid lines. The <TenMinutes> periodic trigger, outlined with dotted lines, isn't shaded because it wasn't executed. The dotted lines from it to its downstream jobs further indicate an execution path not taken.
Polling Triggers
This pipeline's polling trigger polls the repository only at midnight UTC, as specified by the cron pattern. Additional parameters can be used too. See What Is the Format for a YAML Pipeline Configuration?.
pipeline:
name: PollingPipeline
auto-start: false
triggers:
- poll:
name: Poller
cron-pattern: "0 0 * * *"
url: <git-repo-url>
branch: main
start:
- <Poller>
- A
When the pipeline is started manually, the execution flow goes through the trigger action item to job A. In the Pipeline Designer, the trigger has no hue and has a dotted line border. There are dotted lines from Start to the trigger item to job A. When the polling mechanism detects a change, the pipeline is started by the trigger. This is shown with a dotted line from Start to the trigger item, the trigger item has a dark hue, and there is a solid line from the trigger item to job A.
Commit Triggers
A commit trigger automatically runs when a commit happens.
pipeline:
name: CommitPipeline
auto-start: false
triggers:
- commit:
name: OnCommit
url: <git-repo-url>
branch: main
start:
- <OnCommit>
- A
Additional parameters can be used too. See What Is the Format for a YAML Pipeline Configuration?.
Control How a Pipeline Is Automatically Started
The auto-start option automatically starts a pipeline if any job in the pipeline is run. The default setting is "true". If the option is set to "false", the pipeline will start only if it is manually started or if a trigger action item is activated. Starting pipelines in the middle can be problematic, since preceding or parallel steps in the pipeline could set up conditions for follow-on steps; the auto-start option lets you control this behavior.
The first pipeline job that follows Start is a trigger only if it is the only job triggered by Start. Either the entire pipeline or only parts of the pipeline that have defined trigger jobs will automatically start.
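For example, here's a minimal sketch (with hypothetical job names) of a pipeline that never starts automatically when one of its jobs is built on its own; it runs only when started manually or by a trigger action item:
pipeline:
  name: ManualOnlyPipeline
  auto-start: false   # don't start the pipeline when a member job is built independently
  start:
    - Build
    - Deploy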
Logs for Pipelines Manually Started by a Trigger Job
A triggered pipeline starts when the trigger job begins executing. For a pipeline that contains two jobs, Job A and Job B, where Job A triggers the pipeline, the pipeline starts when the trigger job starts. The pipeline log reflects this:
Pipeline started
Job A started
Job A ended
Job B started
Job B ended
Pipeline ended