6 Build Applications and Deploy

After uploading the source code files to Git repositories, you can use the Builds page to create and configure build jobs and pipelines, run builds and generate artifacts, and then use build steps to deploy those artifacts to Oracle Java Cloud Service, Oracle Application Container Cloud Service, Oracle Cloud Applications, or Visual Builder.

Configure and Run Project Jobs and Builds

Oracle Visual Builder Studio (VB Studio) includes continuous integration services to build project source files. You configure the builds from the Builds page.

The Builds page, also called the Jobs Overview page, displays information about all the project's build jobs and provides links to configure and manage them.

What Are Build VMs and Build VM Templates?

A Build Virtual Machine (VM) is an OCI Compute or OCI Compute Classic VM that runs builds of jobs defined in VB Studio projects. A Build VM Template defines the operating system and software installed on Build VMs.

The organization administrator first creates Build VM templates with software that the organization's members use. After creating the templates, the organization administrator allocates Build VMs to each Build VM template. When the organization's members create jobs, they simply associate the appropriate Build VM template with each job.

To configure a job to use software, such as Node.js or Docker, assign a Build VM template that includes the needed software to the job. If you're a project user, you assign the appropriate Build VM template to the job from the job's configuration page.

When a build runs:

  1. The build executor checks the job's Build VM template and then looks for the VM that's allocated to the template:
    • If a VM is available, the build executor immediately runs the build on it.
    • If all VMs are busy running builds of other jobs using the same Build VM template, the build executor waits until a VM becomes available and then runs the current job's build on it.
  2. When a build runs on a VM for the first time, or when a VM starts again after stopping at the end of its sleep timeout period, the build executor installs the software defined in the Build VM template before it runs the build. This takes time.
  3. Depending on how the job is configured, after installing the software, the executor clones the job’s Git repositories to the VM, runs the defined build steps, creates artifacts, and performs post-build steps.
  4. After the build completes, the executor copies any generated artifacts to the OCI Object Storage bucket or the OCI Object Storage Classic container defined by the organization administrator.
  5. The Build VM waits for a period of time for any queued builds. If no builds run in the wait time period, the Build VM uninstalls its software and stops.
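The VM selection in step 1 can be sketched as a simple check over the allocated VMs. This is a simplified illustration of the described behavior, not VB Studio's actual scheduler:

```python
def pick_vm(template, vms):
    """Return an idle Build VM allocated to the job's Build VM template,
    or None if all such VMs are busy (the build then waits in the queue)."""
    for vm in vms:
        if vm["template"] == template and not vm["busy"]:
            return vm
    return None

vms = [
    {"name": "vm1", "template": "Node.js 12", "busy": True},
    {"name": "vm2", "template": "Node.js 12", "busy": False},
]
print(pick_vm("Node.js 12", vms)["name"])  # vm2
print(pick_vm("Docker 19", vms))           # None: no VM allocated to this template
```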

What Are a Job and a Build?

A job is a configuration that defines your application's builds. A build is the result of a job’s run.

A job defines where to find the source code files, how and when to run builds, and the software and environment required to run them. When a build runs, it packages application archives that can be deployed to a web server. A job can have hundreds of builds, each generating its own artifacts, logs, and reports.

Build Concepts and Terms

Here are some terms that this documentation uses to describe the build features and components.

Term Description
Build System Software that automates the process of compiling, linking, and packaging source code into an executable form.
Build executor A basic building block that enables a build to run on a Build VM. Each build uses one build executor, which is one Build VM.
Build artifact A file generated by a build. It could be a packaged archive file, such as a .zip or .ear file, that you can deploy to a server.
Trigger A condition that starts a build.
Software Versions in the Software Catalog

The Software Catalog lists multiple versions of some software, such as Node.js and Java. This software is referred to as software packages.

A package's version number has two parts: the major version and the minor version. If a software's version is 1.2.3, then 1 is its major version and 2.3 is its minor version. In a software tile, the major version number is displayed in the title of the package. On the Configure Software page, the number shown in Version is the installed version, which includes both the major and minor versions.

Here's an example. In this image, Node.js 0.12, 8, 10, and 12 are shown in the software catalog. In the Node.js 12 tile, 12 is the major version and 3.1 is its minor version. The installed version of the software is 12.3.1.
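The split described above falls at the first dot: everything before it is the major version and everything after it is the minor version. A trivial sketch:

```python
def split_version(installed):
    """Split an installed version such as '12.3.1' into its
    major ('12') and minor ('3.1') parts."""
    major, _, minor = installed.partition(".")
    return major, minor

print(split_version("12.3.1"))  # ('12', '3.1')
print(split_version("1.2.3"))   # ('1', '2.3')
```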

Illustration: odcs_software_catalog_version.png

When a new minor version of a software package is available in the Software Catalog, all VM templates using that software package are updated automatically. For example, assume that Node.js 10.13 is available in the Software Catalog for the Node.js 10 package. When Node.js 10.15 is made available in the Software Catalog, all VM templates using the Node.js 10 package update automatically to use Node.js 10.15. If there’s an incompatibility between the upgraded software and other installed software of the VM template, an error is reported with suggestions about the cause of the error.

When a new major version of a software package is available in the catalog, VM templates using the older versions of the software package aren't updated automatically. The new major version of the software is added to the catalog as a separate package. For example, when Node.js 12 is available in the Software Catalog, all VM templates using Node.js 0.12, Node.js 8, or Node.js 10 aren’t updated automatically. To use the new version, you must manually update the VM templates to use the new package.

When a major version of a software package is removed from the catalog, all VM templates using that version are updated automatically to use the next higher version. For example, when Node.js 8 phases out, VM templates using Node.js 8 are automatically updated to use Node.js 10.

Create and Manage Jobs

From the Builds page, you can create jobs that run builds and generate artifacts that you can deploy:

Action How To

Create a blank job

  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.

  3. In the New Job dialog box, in Name, enter a unique name.

  4. In Description, enter the job's description.

  5. In Template, select the Build VM template.

  6. Click Create.

Copy an existing job

You may want to copy parameters and configuration from one job to another. You can do that only when you create a job; you can't copy the configuration of an existing job to another existing job.

After you create the new job, you can modify the copied parameters and configuration:

  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.

  3. In the New Job dialog box, in Name, enter a unique name.

  4. In Description, enter the job's description.

  5. Select the Copy From Existing check box.

  6. In Copy From, select the source job.
  7. In Template, select the Build VM template.

  8. Click Create.

Create a job that accepts build parameters and will be associated with a merge request
  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.

  3. In the New Job dialog, in Name, enter a unique name.

  4. In Description, enter the job's description.

  5. Select the Use For Merge Request check box.

  6. In Template, select the Build VM template.

  7. Click Create.

Create a job using YAML

In VB Studio, you can use a YAML file to create a job and define its configuration. The file is stored in a project's Git repository. See Create and Configure Jobs and Pipelines Using YAML.

Configure a job

The job configuration page opens immediately after you create a job. You can also open it from the Jobs tab: click Configure (the Gear icon).

Run a build of a job

In the Jobs tab, click Build Now (the Build icon).

Delete a job

In the Jobs tab, click Delete (the Delete icon).

Configure a Job

You can create, manage, and configure jobs from the Jobs tab on the Builds page.

To open a job’s configuration page, go to the Jobs tab on the Builds page and click the job’s name. On the Job Details page, click Configure (the Gear icon).

Configure a Job's Privacy Setting

Mark a job as private to restrict who can see or edit its configuration, or run its builds:

  1. In the navigation menu, click Project Administration (the Gear icon).
  2. Click Builds.
  3. Click the Job Protection tab.
  4. From the jobs list, select the job.
  5. Select the Private option.
  6. In Authorized Users, add yourself.
    To add other users, select their names.
  7. Click Save.

A private job shows a Lock icon in the jobs list on the right side of the Job Protection page, in the Jobs tab of the Builds page, and in pipelines.

A private job must be run manually. It won't run if a non-authorized user tries to run it, whether directly, through an SCM or periodic trigger, or from a pipeline.

Access Project Git Repositories

You can configure a job to access a project’s Git repositories and their source code files:

  1. Open the job’s configuration page.
  2. Click Configure (the Tools icon), if necessary.
  3. Click the Git tab.
  4. From the Add Git list, select Git.
  5. In Repository, select the Git repository to track.

    When you created the job, if you selected the Use for Merge Request check box, the field is automatically populated with the ${GIT_REPO_URL} value. Don’t change it.

  6. In Branch, select the branch name in the repository to track. By default, master is set.

    When you created the job, if you selected the Use for Merge Request check box, Branch is automatically populated with the ${GIT_REPO_BRANCH} value. Don’t change it unless you don’t want to link the job with a merge request.

  7. Click Save.

You can specify multiple Git repositories. If you do, set Local Checkout Directory for all Git repositories.

Trigger a Build Automatically on SCM Commit

You can configure a job to monitor a Git repository and trigger a build automatically each time a commit is pushed:

  1. Open the job’s configuration page.
  2. Click Configure (the Tools icon), if necessary.
  3. Click the Git tab and either use the dropdown to select the repository you want to monitor or type the name of the repository in the entry field.
  4. For the Git repository you want to monitor, select the Automatically perform build on SCM commit check box.
  5. To include or exclude files when tracking changes in the repository, see Include or Exclude Files to Trigger a Build.
  6. To exclude users whose commits to the repository don’t trigger builds, in Excluded User, enter the list of user names.
  7. Click Save.
Trigger a Build Automatically According to an SCM Polling Schedule

SCM polling enables you to configure a job to periodically check the job’s Git repositories for any commits pushed since the job’s last build. If updates are found, a build is triggered. You specify the schedule in Coordinated Universal Time (UTC), the primary time standard by which the world regulates clocks and time. If you’re not a Cron expert, use the novice mode and set the schedule by specifying values. If you’re a Cron expert, use the Expert mode.

You can specify the schedule using Cron expressions:

  1. Open the job’s configuration page.
  2. In the Git tab, add the Git repository.

    To include or exclude files when tracking changes in the repository according to a Cron expression, see Include or Exclude Files to Trigger a Build.

  3. Click Settings (the Gear icon).
  4. Click the Triggers tab.
  5. Click Add Trigger and select SCM Polling Trigger.
  6. To use the expert mode, select the Expert mode check box and enter the schedule in the text box.

    The default pattern is 0/30 * * * *, which runs a build every 30 minutes.

    After you edit the expression, it’s validated immediately when the cursor moves out of the text box. Note that other fields of the section aren’t available when the check box is selected.

  7. To use the novice mode, deselect the Expert mode check box and specify the schedule information in Minute, Hour, Day of the Month, Month, and Day of the Week.

    The page displays the generated Cron expression next to the Expert mode check box.

  8. To add or remove 0/ or 1/ at the beginning of the value in the Cron expression, click Toggle Recurrence.

    Tip:

    To check the job’s Git repositories every minute, deselect all check boxes. Remember that this may consume large amounts of system resources.
  9. If necessary, in Comment, enter a comment.
  10. To view and verify the build schedule of the next ten builds, from the timezone drop-down list, select your time zone and then click View Schedule.
  11. Click Save.

To see the job's SCM poll log after a build runs, on the job's details page or the build's details page, click SCM Poll Log.

Generate Cron Expressions

You can use Cron expressions to define periodic build patterns.

For more information about Cron, see http://en.wikipedia.org/wiki/Cron.

You can specify the Cron schedule information in the following format:

MINUTE HOUR DOM MONTH DOW

where:

  • MINUTE is minutes within the hour (0-59)

  • HOUR is the hour of the day (0-23)

  • DOM is the day of the month (1-31)

  • MONTH is the month (1-12)

  • DOW is the day of the week (1-7)

To specify multiple values, you can use the following operators:

  • * to specify all valid values

  • - to specify a range, such as 1-5

  • /X or */X to specify every Xth value through the range, such as 0/15 in the MINUTE field for 0,15,30,45

  • A,B,...,Z can be used to specify multiple values, such as 0,30 or 1,3,5
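The field operators above expand mechanically into sets of values. This small sketch (an illustration, not a full Cron parser) expands a single field into the values it matches:

```python
def expand_field(spec, lo, hi):
    """Expand one Cron field (for example, MINUTE with lo=0, hi=59)
    into the sorted list of values it matches."""
    values = set()
    for part in spec.split(","):
        step = 1
        if "/" in part:                      # skip operator: /X or */X
            part, step = part.split("/")
            step = int(step)
        if part == "*":                      # all valid values
            start, end = lo, hi
        elif "-" in part:                    # a range, such as 1-5
            start, end = map(int, part.split("-"))
        else:                                # a single starting value
            start = int(part)
            end = hi if step > 1 else start
        values.update(range(start, end + 1, step))
    return sorted(values)

print(expand_field("0/15", 0, 59))  # [0, 15, 30, 45]
print(expand_field("1-5", 0, 23))   # [1, 2, 3, 4, 5]
print(expand_field("0,30", 0, 59))  # [0, 30]
```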

Include or Exclude Files to Trigger a Build

When you've configured a job to monitor a Git repository, you can use fileset patterns to include or exclude files when tracking changes in the repository. Then each time a change is committed, only changes to files that match these patterns determine whether a build is triggered or not.

Fileset patterns work as filters to include or exclude files when tracking changes in a repository. They take effect only when you've enabled a build to be triggered either on each SCM commit or according to a polling schedule. Once these settings are enabled, each time changes are committed to the repository, the filter is applied and a build either runs or not based on the specified filter.
  1. Open the job’s configuration page.
  2. Click Configure (the Tools icon), if necessary.
  3. Click the Git tab.
  4. Expand Advanced Git Settings and select either Include or Exclude.
    • Click Include to specify a list of files and directories in the repository that you want to track for changes. By default, all files are included for tracking (**/*), meaning changes to any file or directory in the repository will trigger a build.

      To change the default configuration, select Include and specify the fileset to be included in Directories/Files. You can use regular expressions (regex) or glob patterns to specify the fileset. Each entry must be separated by a new line.

      You can extend this configuration to specify Exceptions to the included fileset. If changes occur only in the fileset specified as an exception, a build won't run.

      Here are some glob pattern examples:

      • To trigger a build following changes to .html, .jpeg, or .gif files in the myapp/src/main/web/ directory, in Directories/Files enter:

        myapp/src/main/web/*.html
        myapp/src/main/web/*.jpeg
        myapp/src/main/web/*.gif

        and leave Exceptions blank. A build runs when a .html, .jpeg, or .gif file is changed in the myapp/src/main/web/ directory.

      • To trigger a build following changes to .java files, but not .html files, enter *.java in Directories/Files and *.html in Exceptions. A build runs when any .java file is changed, except when all changed files are .html files.

      • To trigger a build following changes to .java files, but not test.java, enter *.java in Directories/Files and test.java in Exceptions. A build runs when any .java file is changed, except when test.java is the only changed file.
    • Click Exclude to specify a list of files and directories in the repository that you don't want to track for changes. If all changes are only in the specified files, a build won’t be triggered. By default, no files are excluded, meaning all files and directories are tracked and therefore, changes to any file or directory in the repository will trigger a build.

      To change the default configuration, select Exclude and specify the fileset to be excluded in Directories/Files. You can use regular expressions (regex) or glob to specify an excluded fileset. Each entry must be separated by a new line.

      Optionally, specify files or directories within the excluded fileset that you want to include as Exceptions. If changes occur in the fileset specified as an exception, a build will be triggered.

      Here are some glob pattern examples:

      • To prevent a build when only .java files are changed, enter *.java in Directories/Files and leave Exceptions blank. A build won't run when all changed files are .java files, but a change to any other file (say, test.txt or test.html) triggers a build.

      • To prevent a build when .java files in the /myapp/mobile/ directory are changed, except for test.java, enter /myapp/mobile/*.java in Directories/Files and test.java in Exceptions. A build won't run when all changes are in .java files other than test.java in the /myapp/mobile/ directory, but a build runs when test.java in the /myapp/mobile/ directory is the only changed file.

      • To trigger a build only for changes to .sql files, enter **/* in Directories/Files and *.sql in Exceptions. A build runs only when .sql files are changed.

      • To prevent a build when only .html, .jpeg, or .gif files in the myapp/src/main/web/ directory are changed, in Directories/Files enter:

        myapp/src/main/web/*.html
        myapp/src/main/web/*.jpeg
        myapp/src/main/web/*.gif

        and leave Exceptions blank. A build won't run when only .html, .jpeg, or .gif files in the myapp/src/main/web/ directory are changed.

      • To prevent a build when .gitignore files are changed, enter *.gitignore in Directories/Files and leave Exceptions blank. A build won't run when the only changed files are .gitignore files.
  5. Click Save.
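The Include-mode decision described above can be modeled with glob matching. This is one plausible reading of the rules (a build triggers when some changed file matches the included fileset, unless every changed file falls under the exceptions); VB Studio's actual matcher, including its handling of ** and directory separators, may differ:

```python
from fnmatch import fnmatch

def matches(path, patterns):
    """True if the path matches any of the glob patterns."""
    return any(fnmatch(path, p) for p in patterns)

def triggers_build(changed, include, exceptions):
    """Include mode: trigger when some changed file matches the include
    patterns, unless all changed files fall under the exceptions."""
    if not any(matches(f, include) for f in changed):
        return False
    return not all(matches(f, exceptions) for f in changed)

print(triggers_build(["Main.java", "Util.java"], ["*.java"], ["test.java"]))  # True
print(triggers_build(["test.java"], ["*.java"], ["test.java"]))               # False
print(triggers_build(["readme.html"], ["*.java"], []))                        # False
```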
Use External Git Repositories

If you use an external Git repository to manage source code files, you can configure a job to access its files when a build runs.

If the external Git repository is a public repository, mirror it in the project or use its direct URL in the job configuration. If the external Git repository is a private repository, you must mirror it in the project. See Mirror an External Git Repository.

To configure a job to use an external Git repository:

  1. Open the job’s configuration page.
  2. Click Configure (the Tools icon), if necessary.
  3. Click the Git tab.
  4. From the Add Git list, select Git.
  5. If the external Git repository is mirrored with the project, in Repository, select the repository name. Note that the build executor uses the internal address URL of the mirrored repository.
    If the external Git repository isn't mirrored with the project, enter the repository's direct URL. Don't enter the direct URL of a private repository.
  6. Configure other fields of the page and click Save.
To trigger a build on an update to the external Git repository, set up SCM polling according to the frequency of commits. VB Studio can't trigger a build immediately on an update to the external Git repository.
Before you set SCM polling, note that if you use the internal address URL of a mirrored repository, there's a wait time of at least 5 minutes. If you use the external address URL or the direct URL of the repository, there's a wait time of at least 1 minute. Remember that polling every few minutes consumes large amounts of system resources.
Access Files of a Git Repository's Private Branch

To access a Git repository's private branch, configure the job to use SSH:

  1. On the computer that you'll use to access the Git repository, generate an SSH key pair and upload its public key to VB Studio. See Upload Your Public SSH Key. Make sure that the private key on your computer is accessible to the Git client.
    Skip this step if you've already uploaded your SSH public key.
  2. Copy the Git repository’s SSH URL.

    On the Git page, from the Repositories drop-down list, select the Git repository. From the Clone drop-down list, click Copy to clipboard (the Copy icon) to copy the SSH URL.

  3. Open the job’s configuration page.
  4. Click Configure (the Tools icon), if necessary.
  5. Click the Git tab.
  6. From the Add Git list, select Git.
  7. In Repository, paste the SSH URL of the Git repository.
  8. In Branch, select the private branch.
  9. Click the Before Build tab.
  10. Click Add Before Build Action and select SSH Configuration.
  11. In Private Key and Public Key, enter the private and the public key of your SSH Private-Public key pair.
    Leave Public Key empty to use the fingerprint.
  12. In Pass Phrase, enter the pass phrase of your SSH Private-Public key pair. Leave it empty if the keys aren’t encrypted with a pass phrase.
  13. Continue to configure the job, as desired.
  14. Click Save.
Publish Git Artifacts to a Git Repository

Git artifacts, such as tags, branches, and merge results, can be published to a Git repository as a post-build action:

  1. Open the job’s configuration page.
  2. Click Configure (the Tools icon).
  3. In the Git tab, add the Git repository where you want to publish Git artifacts.
  4. Click the After Build tab.
  5. Click Add After Build Action and select Git Publisher.
  6. To push Git artifacts to the Git repository only if the build succeeds, select the Publish only if build succeeds check box.
  7. To push merges back to the target remote name, select the Merge results check box.
  8. To push a tag to the remote repository, in Tag to push, specify the Git repository tag name. You can also use environment variables. In Target remote name, specify the target remote name of the Git repository where the tag is pushed. By default, origin is used.

    The push fails if the tag doesn’t exist. To create the tag, select the Create new tag check box and enter a unique tag name.

  9. To push a branch to the remote repository, in Branch to push, specify the Git repository branch name. You can also use environment variables. In Target remote name, specify the target remote name of the Git repository where the branch is pushed. By default, origin is used.
  10. Click Save.
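Conceptually, the Git Publisher's tag and branch options correspond to ordinary git commands. The sketch below only builds the equivalent command lines; it's an illustration, not a model of VB Studio's actual implementation:

```python
def git_publisher_commands(tag=None, branch=None, remote="origin", create_tag=False):
    """Return git commands that roughly correspond to the Git Publisher
    options: push a tag (optionally creating it first) and/or a branch."""
    cmds = []
    if tag:
        if create_tag:
            cmds.append(["git", "tag", tag])       # Create new tag
        cmds.append(["git", "push", remote, tag])  # Tag to push / Target remote name
    if branch:
        # Branch to push: publish the local HEAD to the named remote branch.
        cmds.append(["git", "push", remote, f"HEAD:{branch}"])
    return cmds

print(git_publisher_commands(tag="v1.0", create_tag=True))
# [['git', 'tag', 'v1.0'], ['git', 'push', 'origin', 'v1.0']]
```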
Advanced Git Options

When you configure a job's Git repositories, you can also set some advanced Git options, such as changing the remote name of the repository, setting the checkout directory in the workspace, and choosing whether to clean the workspace before a build runs.

You can perform these configuration actions from the Git tab of the job configuration page:

Action How To

Change the remote name of a repository

For the Git repository, expand Advanced Repository Options, and specify the new name in Name. The default remote name is origin.

Specify a reference specification of a repository

A reference repository helps speed up the job's builds by creating a cache in the workspace, which reduces data transfer. When a build runs, instead of cloning the Git repository from the remote, the build executor clones it from the reference repository.

To create a reference specification of a Git repository, expand Advanced Repository Options, and specify the name in Ref Spec.

Leave the field empty to create a default reference specification.

Specify a local checkout directory

The local checkout directory is a directory in the workspace where the Git repository is checked out when a build runs.

To specify the directory of a Git repository, expand Advanced Repository Options, and specify the path in Local Checkout Directory. If left empty, the Git repository is checked out on the root directory of the workspace.

Include or exclude a list of files and directories to determine whether to trigger a build or not
When you've enabled a build to be triggered either on each SCM commit or according to a polling schedule, expand Advanced Git Settings and select Include or Exclude.
  • To include a list of files and directories to track for changes and trigger a build, click Include and specify a fileset in Directories/Files. Default is **/*, indicating that all files and directories in the repository are tracked and changes to any file or directory will trigger a build.

    If you don't want a build to be triggered for some files and directories in the included fileset, specify Exceptions. For example, to trigger a build following changes to all but .java files, enter **/* in Directories/Files and *.java in Exceptions.

  • To exclude a list of files and directories from tracking and prevent a build from being triggered, click Exclude and specify a fileset in Directories/Files. If all changes occur only in the specified files or directories, a build won’t run. Default is an empty list, indicating no files are excluded.

    If you want a build to be triggered for some files and directories in the excluded fileset, specify Exceptions. For example, to trigger a build only when .sql files are changed, enter **/* in Directories/Files and *.sql in Exceptions.

For more examples, see Include or Exclude Files to Trigger a Build.

Check out the remote repository’s branch and merge it into a local branch

Expand Advanced Git Settings and, in Merge another branch, specify the branch to merge into. If specified, the build executor checks out the revision to build as HEAD on this branch.

If necessary, in Checkout revision, specify the branch to check out and build as HEAD on the value of Merge another branch.

Configure Git user.name and user.email variables

Expand Advanced Git Settings and in Config user.name and Config user.email, specify the user name and the email address.

Merge to a branch before a build runs

Expand Advanced Git Settings and select the Merge from another repository check box.

In Repository, enter or select the name of the repository to be merged. In Branch, enter or select the name of the branch to be merged. If no branch is specified, the default branch of the repository is used.

The build runs only if the merge is successful.

Prune obsolete local branches before running a build

Expand Advanced Git Settings and select the Prune remote branches before build check box.

Skip the internal tag

When a build runs, the build executor checks out the Git repository to the local repository of the workspace and applies a tag to it. To skip this process, expand Advanced Git Settings and select the Skip internal tag check box.

Remove untracked files before running a build

Expand Advanced Git Settings and select the Clean after checkout check box.

Retrieve sub-modules recursively

Expand Advanced Git Settings and select the Recursively update submodules check box.

Display commit’s author in the log

By default, the Git change log shows the commit’s Committer. To display the commit’s Author, expand Advanced Git Settings and select the Use commit author in changelog check box.

Delete all files of the workspace before a build runs

Expand Advanced Git Settings and select the Wipe out workspace before build check box.

View SCM Changes Log

The SCM Change log displays the files that were added, edited, or removed from the job’s Git repositories before the build was triggered.

You can view the SCM changes log from the job’s details page and a build’s details page. The Recent SCM Changes page, opened from the job’s details page, displays SCM commits from the last 20 builds in reverse order. The SCM Changes page, opened from a build’s details page, displays SCM commits that happened after the previous build.

The log shows the build ID, commit messages, commit IDs, and affected files.

Trigger a Build Automatically on a Schedule

You can configure a job to run builds on a schedule, specified in Coordinated Universal Time (UTC), the primary time standard by which the world regulates clocks and time.

You can specify the schedule using Cron expressions. If you’re not a Cron expert, use the novice mode and set the schedule by specifying values. If you’re a Cron expert, use the Expert mode:

  1. Open the job’s configuration page.

  2. Click Settings (the Gear icon).

  3. Click the Triggers tab.

  4. Click Add Trigger and select Periodic Build Trigger.

  5. To use the expert mode, select the Expert mode check box, and enter the schedule in the text box.

    The default pattern is 0/30 * * * *, which runs a build every 30 minutes.

    After you edit the expression, it’s validated immediately when the cursor moves out of the text box. Note that other fields of the section aren’t available when the check box is selected.

  6. To use the novice mode, deselect the Expert mode check box and specify the schedule information in Minute, Hour, Day of the Month, Month, and Day of the Week.

    The page displays the generated Cron expression next to the Expert mode check box.

  7. To add or remove 0/ or 1/ at the beginning of the value in the Cron expression, click Toggle Recurrence.

  8. If necessary, in Comment, enter a comment.

  9. To view and verify the build schedule of the next ten builds, from the timezone drop-down list, select your time zone and then click View Schedule.

  10. Click Save.

Use Build Parameters

Using build parameters, you can pass information to a build at run time that isn't available when you configure the job.

You can configure a job to use a parameter and its value as an environment variable or through variable substitution in other parts of the job configuration. When a build runs, a Configure Parameters dialog box opens so you can enter or change the default values of the parameters:

  1. Open the job’s configuration page.
  2. Click Configure (the Tools icon).
  3. Click the Parameters tab.
  4. From the Add Parameter drop-down list, select the parameter type.

    You can add these types of build parameters:

    Use this parameter type ... To:

    String

    Accept a string value from the user when a build runs. The parameter field appears as a text box in the Configure Parameters dialog box.

    Password

    Accept a password value from the user when a build runs. The parameter field appears as a password box in the Configure Parameters dialog box.

    Boolean

    Accept true or false as input from the user when a build runs. The parameter field appears as a check box in the Configure Parameters dialog box.

    Choice

    Accept a value from a list of values when a build runs. The parameter field appears as a drop-down list in the Configure Parameters dialog box. The first value is the default selected value.

    Merge Request

    Accept string values for the Git repository URL, the Git repository branch name, and the merge request ID as input. The parameter fields appear as text boxes in the Configure Parameters dialog box.

    Use this parameter if you want to link an existing job with a merge request.

  5. Enter values, such as name, default value, and description.

    Note:

    Parameter names must contain only letters, numbers, or underscores. They can't begin with a number, and they aren't case-sensitive (the names "job", "JOB", and "Job" are all treated the same).

    You can't use hyphens in build parameter names. When the build system encounters a script or a command with a hyphenated build parameter name in a UNIX shell build step, it removes the portion of the name preceding the hyphen. If you try to use a hyphen in a build parameter name in a job, you won't be able to save the job configuration that includes it.

    In addition, you shouldn't use an underscore by itself or any of the system or other environmental variable names listed in Reserved Words that Shouldn't Be Used in Build Parameter Names as build parameter names. There could be unintended consequences if you do.

  6. Click Save.

For example, if you want a job to change the default values of Gradle's version, OCI username, and OCI user's password when a build runs, create the Choice, String, and Password build parameters to accept the values. The Password parameter's value isn't displayed in the input field.

Use the $BUILD_PARAMETER format when you reference build parameters. (The ${BUILD_PARAMETER} format can be used too.) For example, this screenshot shows the Gradle version, OCI username, and OCI password parameters used in the build step fields of a job. Notice that the password parameter's variable name isn't displayed:

When a build runs, the Configure Parameters dialog box opens where you can enter or change the default values of parameters. All parameter values, except the Password parameter's value, display as string in the dialog box (and subsequently in the build log):
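To illustrate the two reference styles, here is a minimal UNIX Shell build step sketch. The parameter names and values are hypothetical, and the assignments at the top stand in for values that the Configure Parameters dialog box would normally supply:

```shell
# Stand-ins for values normally supplied by the Configure Parameters
# dialog box (hypothetical Choice and String parameters).
GRADLE_VERSION=6.8
OCI_USERNAME=alex.admin

# Both reference styles work; braces disambiguate the parameter name
# when other text follows it immediately.
echo "Using Gradle $GRADLE_VERSION as $OCI_USERNAME"
echo "Archive: app-${GRADLE_VERSION}.zip"
```

Password parameter values can be referenced the same way, but they are masked in the build log, so avoid echoing them.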

If you selected the Use for Merge Request check box while creating the job, GIT_REPO_URL, GIT_REPO_BRANCH, and MERGE_REQ_ID Merge Request parameters are automatically added to accept the Git repository URL, the Git repository branch name, and the merge request ID as input from the merge request, respectively. The GIT_REPO_URL and GIT_REPO_BRANCH variables are automatically set in the Repository and Branch fields of the Git tab.

When a job in a pipeline runs, there is no way to enter or change the default values of the parameters. Job parameters in pipelines exhibit the following implicit behaviors:
  • Upstream job parameters are passed to downstream jobs.

    For example, in a pipeline that flows from Job A to Job B to Job C, if parameter P1 is defined in Job A and parameter P2 is defined in Job B, then parameter P1 will be passed to Job B and parameters P1 and P2 will be passed to Job C.

  • An upstream job that has a parameter with the same name as a downstream job's parameter overwrites the default value of that parameter in the downstream job.

    For example, if parameters P1 and P2 are defined in Job A and parameters P2 and P3 are defined in Job B, then the value of parameter P2 from Job A will overwrite the default value of parameter P2 in Job B. If there was a Job C downstream from Job B, then the initial default value of P2 (from Job A) plus the values of P1 and P3 would be passed to Job C.

  • When a build of the pipeline runs, the Configure Parameters dialog box displays all parameters of the jobs in the pipeline. Duplicate parameters are displayed once, and that single value is used by all jobs that use the parameter. The default value of a duplicate parameter comes from the first job in the pipeline where it is defined.

    For example, in a pipeline that flows from Job A to Job B to Job C, if parameter P1 is defined in Job A; parameters P2 and P3 are defined in Job B; and parameters P1 and P4 are defined in Job C; then when the pipeline runs, it displays parameters P1, P2, P3, and P4 once each in the Configure Parameters dialog box, even though parameter P1 is defined in two jobs. The default value of P1 comes from Job A and is passed to subsequent jobs of the pipeline.

    If the Auto start when pipeline jobs are built externally check box is selected for the pipeline, the Configure Parameters dialog box isn't displayed when a build of a pipeline's job runs. The jobs downstream of the job that triggered the build use the default values of their parameters. If a parameter is duplicated, a job uses the default value from the first job where the parameter was defined.

    For example, in a pipeline that flows from Job A to Job B to Job C, if parameter P1 is defined in Job A; parameters P2 and P3 are defined in Job B; and parameters P1 and P4 are defined in Job C; then when a build of Job A runs, it passes the default value of P1 to Job B and Job C, and overwrites the default of P1 in Job C. If a build of Job B runs, then the builds use the default values of P2, P3, P1 (defined in Job C) and P4.

To learn about how to use build parameters in a Shell build step, see the GNU documentation on Shell Parameter Expansion at https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html.
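A few of the parameter-expansion forms described in that reference are particularly handy in Shell build steps. A small sketch (the variable name is hypothetical):

```shell
# Fall back to a default when the build parameter is unset or empty.
RELEASE="${RELEASE:-1.0.0}"
echo "$RELEASE"        # 1.0.0 when RELEASE wasn't supplied

# Trim the shortest suffix matching ".*" (drops the patch component).
echo "${RELEASE%.*}"   # 1.0

# Trim the shortest prefix matching "*." (keeps minor.patch).
echo "${RELEASE#*.}"   # 0.0
```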

Reserved Words that Shouldn't Be Used in Build Parameter Names

A system environment variable shouldn't be used as a parameter name. If you use one of the following system environment variable names, the build might run incorrectly or even fail unexpectedly:

  • home
  • hostname
  • lang
  • mail
  • path
  • pwd
  • shell
  • term
  • user
  • username

In addition, to avoid interfering with the plugin or the process that introduced them, you shouldn't use the following environment variables (listed alphabetically), which may be in use elsewhere:

  • _ (underscore)
  • ant_home
  • build_dir, build_id, build_number
  • cvs_rsh
  • dcspassbuildinfofeaturecurrentorg, dcspassbuildinfofeaturecurrentproject
  • g_broken_filenames, git_repo_branch, git_repo_url, gradle_home
  • histcontrol, histsize, http_proxy, http_proxy_host, http_proxy_port, https_proxy, https_proxy_host, https_proxy_port
  • isdcspassbuildinfofeatureenabled
  • java_home, java_vendor, javacloud_home, javacloud_home_11_1_1_7_1, javacloud_home_11g, javacloud_home_soa, javacloud_home_soa_12_2_1, job_name
  • lessopen, logname
  • m2_home, merge_req_id, middleware_home, middleware_home_11_1_1_7_1, middleware_home_11g, middleware_home_soa, middleware_home_soa_12_2_1
  • no_proxy, no_proxy_alt, node_path
  • oracle_home, oracle_home_11_1_1_7_1, oracle_home_11g, oracle_home_soa, oracle_home_soa_12_2_1
  • qtdir, qtinc, qtlib
  • shlvl, ssh_askpass
  • tool_path
  • wls_home, wls_home_11_1_1_7_1, wls_home_11g, wls_home_soa, wls_home_soa_12_2_1, workspace
Access an Oracle Cloud Service Using SSH

You can configure a job to use SSH to access any Oracle Cloud service instance that has SSH enabled, such as Oracle Cloud Infrastructure Compute Classic VMs.

You can configure the job to use either of the following options, or both:
  • Create an SSH tunnel to access a process running on a remote system, including an on-premises system, via the SSH port. The SSH tunnel is created at the start of the build job and is destroyed automatically when the job finishes.

  • Set up the default ~/.ssh directory with the provided keys in the build’s workspace for use with the command-line tools. The modifications revert when the job finishes.

To connect to the Oracle Cloud service instance, you need the IP address of the server, the credentials of a user who can connect to the service instance, and the local and remote port numbers:

  1. Open the job’s configuration page.
  2. Click Configure the Tools icon, if necessary.
  3. Click the Before Build tab.
  4. Click Add Before Build Action and select SSH Configuration.
  5. In Private Key and Public Key, enter the private and the public key of your SSH Private-Public key pair.
    Leave the Public Key empty to use fingerprint.
  6. In Pass Phrase, enter the passphrase of your SSH Private-Public key pair. Leave it empty if the keys aren’t encrypted with a passphrase.

    Note:

    If you want to access the Oracle Cloud service using a command or a Shell script from the UNIX Shell step, do not use a key protected by a passphrase, or SSH will interactively prompt for a passphrase during the build.

  7. In SSH Server Public Key, enter the public key of the SSH server.

    If you’re using a command-line SSH tool, note that the host name and the IP address must match. The host name and the IP address can be comma separated. Example: ssh1.example.com,10.0.0.13 ssh-rsa ... .

    Leave the field empty to skip host verification. For command-line tools, such as ssh, add the -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null option explicitly to skip host verification.

  8. To use an SSH tunnel, select the Create SSH Tunnel check box.

    An SSH tunnel provides an additional layer of security and can only be set up between trusted hosts. After you select the check box, enter the SSH server details:

    • Username: Name of the user who can connect to the SSH server.

    • Password: Password of the SSH user. Leave the field empty to use key-based authentication.

    • Local Port: Port number of the client used for local port forwarding.

    • Remote Host: Name of the remote host, or an interface on the SSH server.

    • Remote Port: Port number of the remote host or interface.

    • SSH Server: Name or IP address of the target SSH server.

    • Connect String: Displays the connect string to be used to set up the SSH tunnel.

  9. To use command line tools (such as ssh, scp, or sftp), select the Setup files in ~/.ssh for command-line ssh tools check box.
    When a build runs, the necessary files (such as known_hosts) containing the information that you’ve provided are created in the ~/.ssh directory of the build system workspace. The files are removed automatically after the build is complete.
  10. Click Save.
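For orientation, the Connect String shown in step 8 corresponds to standard ssh local port forwarding. This sketch assembles such a string from hypothetical values mirroring the tunnel fields:

```shell
# Hypothetical tunnel settings, mirroring the fields in step 8.
LOCAL_PORT=2222
REMOTE_HOST=appserver.internal
REMOTE_PORT=1521
SSH_USER=opc
SSH_SERVER=ssh1.example.com

# -L forwards connections made to the local port through the SSH server
# to the remote host and port for the duration of the session.
CONNECT="ssh -L ${LOCAL_PORT}:${REMOTE_HOST}:${REMOTE_PORT} ${SSH_USER}@${SSH_SERVER}"
echo "$CONNECT"
```

Build steps can then talk to localhost:2222 as if it were the remote service, which is what makes the tunnel useful for reaching otherwise inaccessible hosts.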
Access the Oracle Maven Repository

The Oracle Maven Repository contains artifacts, such as ADF libraries, provided by Oracle. You may require these artifacts to compile, test, package, perform integration testing, or deploy your applications. For more information about the Oracle Maven repository, see https://maven.oracle.com/doc.html.

To build your applications and access the Oracle Maven Repository, you configure the job and provide your credentials to access the repository:

  1. Open https://www.oracle.com/webapps/maven/register/license.html in your web browser, sign in with your Oracle Account credentials, and accept the license agreement.

  2. Configure the POM file and add the Oracle Maven Repository details:

    1. Add a <repository> element to refer to https://maven.oracle.com:

      <repositories>
        <repository>
          <name>OracleMaven</name>
          <id>maven.oracle.com</id>
          <url>https://maven.oracle.com</url>
        </repository>
      </repositories>
      

      Depending on your application, you may also want to add the <pluginRepository> element to refer to https://maven.oracle.com:

      <pluginRepositories>
        <pluginRepository>
          <name>OracleMaven</name>
          <id>maven.oracle.com</id>
          <url>https://maven.oracle.com</url>
        </pluginRepository>
      </pluginRepositories>
    2. Commit the POM file to the project Git repository.

  3. If you’re a project owner, set up Oracle Maven Repository connections for the project’s team members.

  4. Create and configure a job to access Oracle Maven Repository.

Create and Manage Oracle Maven Repository Connections

If your project users access the Oracle Maven Repository frequently, you can create a pre-defined connection for them. Project users can then configure a job and use the connection to access the artifacts of the Oracle Maven Repository while running builds.

To create a connection, you need the Oracle Technology Network (OTN) Single Sign-On (SSO) credentials of a user who has accepted the Oracle Maven Repository license agreement.

You must be a project Owner to add and manage Oracle Maven Repository connections:

To add an Oracle Maven Repository connection:

  1. In the navigation menu, click Project Administration Gear.

  2. Click Build.

  3. Click the Maven Connection tab.

  4. Click Add Maven Connection.

  5. In the Create Maven Connection dialog box, in Connection Name, enter a unique name.

  6. In OTN Username and OTN Password, enter the credentials of a user who has accepted the Oracle Maven Repository license agreement.

  7. In Server Id, if necessary, enter the ID to use for the <server> element of the Maven settings.xml file or use the default maven.oracle.com ID.

  8. Click Create.

To edit a connection and change the connection’s user credentials or provide another server ID:

  1. In the navigation menu, click Project Administration Gear.

  2. Click Build.

  3. Click the Maven Connection tab.

  4. Click the connection name and then click the Edit icon.

  5. In the Edit Maven Connection dialog box, if necessary, enter the credentials of a user with a valid SSO user name.

    In Server Id, if necessary, enter the ID to use for the <server> element of the Maven settings.xml file. If not provided, the ID defaults to maven.oracle.com.

  6. Click Update.

To delete a connection:

  1. In the navigation menu, click Project Administration Gear.

  2. Click Build.

  3. Click the Maven Connection tab.

  4. Click the connection name and then click Delete.

  5. In the Delete Maven Connection dialog box, click Delete.

Configure a Job to Connect to the Oracle Maven Repository

You can set up a job that connects to the Oracle Maven Repository using a predefined connection:

  1. Open the job’s configuration page.
  2. Click Configure the Tools icon.
  3. Click the Before Build tab.
  4. Click Add Before Build Action and select Oracle Maven Repository Connection.
  5. From Use Existing Connection, select a pre-defined connection. A project owner may have already created one so that you don't have to set it up yourself.

    If there’s no pre-defined connection available, or you want to set up your own connection, click the toggle button. In OTN Username and OTN Password, enter the credentials of a user who has accepted the Oracle Maven Repository license agreement.

  6. In Server Id, if required, enter the ID to use for the <server> element of the Maven settings.xml file, or use the default maven.oracle.com ID.
  7. If you’re using a custom settings.xml file, in Custom settings.xml, enter the file’s path.
  8. Click Save.
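For background, the Server Id maps to a <server> entry that Maven reads from settings.xml when it authenticates to a repository. A minimal sketch of such an entry, with placeholder credentials (the connection you configure above supplies the real values for you):

```xml
<settings>
  <servers>
    <server>
      <!-- Must match the Server Id entered above (default: maven.oracle.com) -->
      <id>maven.oracle.com</id>
      <username>your.otn.username@example.com</username>
      <password>your-otn-password</password>
    </server>
  </servers>
</settings>
```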
Generate a Dependency Vulnerability Analysis Report

You can configure a job to generate a Dependency Vulnerability Analysis (DVA) report for a Maven, Node.js/Javascript, or Gradle application. This report can help you analyze any publicly known vulnerabilities in the application's dependencies.

When a build runs, VB Studio scans the job's POM file (Maven), package.json file (Node.js/Javascript), or build.gradle file (Gradle) and checks the direct and transitive dependencies against the National Vulnerability Database (NVD). See https://nvd.nist.gov/ to find more about NVD.

Dependencies in Node.js and Javascript projects are also checked for vulnerabilities against the following sources:
  • NPM: Data may be retrieved from the NPM Public Advisories, https://www.npmjs.com/advisories.
  • RetireJS: Data may be retrieved from the RetireJS community, https://retirejs.github.io/retire.js/.
  • Sonatype OSS Index: Data may be retrieved from the Sonatype OSS Index, https://ossindex.sonatype.org.

For any vulnerabilities found, you can configure the job to mark the build as failed or file an issue. If email notifications have been enabled or if a Slack webhook has been configured, you can be notified about these actions through email or Slack.

To configure a job to scan for security vulnerabilities:

  1. Open the job’s configuration page.
  2. Click the Before Build tab.
  3. From Add Before Build Action, select Dependency Vulnerability Analyzer.

    After adding the Dependency Vulnerability Analyzer build action, make sure it's enabled. You can disable the DVA report generation by disabling the build action.

  4. In Threshold at or above, select the score threshold.

    The scores capture the principal characteristics of a vulnerability and reflect its severity.

    Note:

    The threshold and confidence settings have different mappings and values, depending on the type of project (Node.js/Javascript, Maven, or Gradle). The Common Vulnerability Scoring System (CVSS) score is for vulnerabilities from the NVD database only. Vulnerabilities in Node.js and Javascript projects can come from sources (NPM, RetireJS, Sonatype OSS Index) in addition to those that come from NVD.

    For more information about how CVSS scores are calculated, see https://nvd.nist.gov/vuln-metrics/cvss.

    This table explains how the levels (Low, Medium, High) are defined for each vulnerability source. The Analyzer reports vulnerabilities when the value for the level you choose is met or exceeded.
    Source               Project                 Low              Medium           High
    NVD                  Maven, NodeJS, Gradle   Score 0.0-3.9    Score 4.0-6.9    Score 7.0-10.0
    NPM                  NodeJS                  Low              Moderate         High or Critical
    RetireJS             NodeJS                  Low              Medium           High or Critical
    Sonatype OSS Index   NodeJS                  Score 0.0-3.9    Score 4.0-6.9    Score 7.0-10.0

    For example, if you select Medium, any vulnerability with a CVSS score of 4.0 or above (or, for a Node.js project, a Moderate NPM level, a Medium RetireJS level, or a Sonatype OSS Index score of 4.0 or above) is detected and reported.

  5. In CPE Confidence, select the confidence rating the DVA has for the identified Common Platform Enumeration (CPE).

    CPE is a structured naming scheme for describing and identifying classes of applications, operating systems, and hardware devices present among an enterprise's computing assets. To find more about CPE, see https://csrc.nist.gov/projects/security-content-automation-protocol/specifications/cpe/.

    The CPE Confidence rating helps you filter the Common Vulnerabilities and Exposure (CVE) identifiers based on the confidence level. CVE is a list of common identifiers for publicly known cybersecurity vulnerabilities. To find more about CVE, see https://cve.mitre.org/.

    For example, select Medium to filter out the low confidence CVE identifiers from the report.

    Note:

    CPE confidence levels are not supported for NPM, RetireJS, and Sonatype OSS Index sources. For these sources, the Analyzer reports all vulnerabilities, regardless of the level you choose.
  6. To fail the build if a vulnerability is detected, select the Fail Build check box.
  7. To automatically file an issue for every build file where a vulnerability is detected, select the Create issue for every affected build file check box.
    In Product and Component, select the issue's product and component.
  8. Click the Steps tab.
  9. Add a Unix Shell step (or appropriate step, such as a Maven step to build the POM file).
    For example, if you add a UNIX Shell step, enter the following command to build the pom.xml file in the app_dir directory in the job's Git repository:

    mvn install -fae -f app_dir/pom.xml -X

  10. Click Save.
  11. Run a build of the job.
    To trigger the build manually, in the Job Details page, click Build Now.
  12. After the build is complete and a vulnerability is detected, click the Vulnerabilities icon to view the vulnerabilities report. If no vulnerabilities were detected, Vulnerabilities will be disabled.
  13. On the Dependency Analyzer Summary page, review the affected files, dependencies, and detected vulnerabilities.
    Expand the Report section to view the files of your application where vulnerabilities are found (in the POM file, in this example):
    Vulnerability report
After the DVA report is generated, expand each file in the Report section to view these details:
  • Issue ID, if the Create issue for every affected build file check box was selected. Click the issue link to open it.

    You can also open the Project Home page and check the recent activities feed about the issue's create notification. You should see a message that an issue was created, such as System created Defect 2: Vulnerabilities in -MavenJavaApp. If an issue was previously created for the vulnerability, a comment will be added to the issue and a message like System commented Defect 2: Vulnerabilities in - MavenJavaApp will be added to the activities feed.

  • Merge request ID, if the Resolve button was clicked to resolve the vulnerabilities. Click the merge request link to open it.
  • Number of vulnerabilities
  • Name of each dependency where a vulnerability is found
  • Each dependency's type (direct or transitive)

    A transitive dependency displays a Transitive label next to the name. A direct dependency displays no label.

  • Number of alerts and alert categories of vulnerabilities (High, Medium, or Low)
  • Expand each dependency to view its vulnerabilities

    To mute a vulnerability's alerts, expand the vulnerability in the Report section, and click Mute in Alerts. In the Mute Vulnerability dialog box, review the details, and click Mute. The muted vulnerability won't be reported during the next run and it will not cause the build to fail. It will simply be included in the report as a muted vulnerability that should be used only for reference or to be unmuted and dealt with at some future time.

    Muted vulnerabilities will only show up in a report for the latest build, not in reports for any previous builds.

To fix a reported vulnerability, use Resolve and the dropdown menu in the analysis tool to change the dependency's version to one that doesn't have the vulnerability.

Resolve Reported Vulnerabilities Automatically

After the Dependency Vulnerability Analysis (DVA) report for the Maven, Node.js, Javascript, or Gradle application has been generated, review the report to identify the vulnerabilities in the flagged files, and click the Resolve button to resolve them.

The Resolve button simplifies and automates the process for resolving vulnerabilities found in the direct as well as transitive dependencies of the application's build file. The Resolve button isn't available in the DVA reports of older builds of the job. It is only available in the latest build of the job. The Resolve button is also disabled if a package.json file in a Node.js or Javascript application has vulnerabilities in transitive dependencies only. Transitive dependencies in Node.js and Javascript applications must be resolved manually, by editing the direct dependencies in the package.json file and rerunning the analyzer.

Click the Resolve button to resolve any direct and transitive dependencies that were found:

  1. In the Report section of the vulnerability analysis report, expand the affected build file (POM is shown):
    Vulnerability report
  2. Click Resolve.

    If a merge request already exists, you can cancel the dialog and use that merge request, or continue to create another one.

  3. In the Resolve Vulnerability dialog box, review the reported vulnerabilities.
  4. If an issue was created when the report was generated, its ID is displayed. If no issue was created, select the Create issue to track this resolution check box to create it.

    In Linked Builds, add an existing build to link it to the merge request.

    In Reviewers, add team members to review the merge request:

    Resolve Vulnerability dialog box
  5. For each vulnerability, in Available Versions, select a version of the direct dependency (or of the dependency that pulls in the transitive dependencies) that doesn't have the reported vulnerability.

    If you don't want to resolve the dependency or no versions are available, select Do Not Resolve.

  6. Click Create New Merge Request.

    When you click the button, VB Studio does the following:

    1. Creates a merge request with details about the vulnerabilities found.
    2. Creates a branch with the job's Git repository branch as the base branch, and then sets it as the review branch of the merge request.
    3. Sets the job's Git repository branch as the target branch of the merge request.
    4. Updates the review branch's application build file to use the specified versions of the dependencies.

    For example, if the job that generated the vulnerability report uses the JavaMavenApp Git repository and its release1.1 branch, then a new branch is created in JavaMavenApp using release1.1 as the base branch and is used as the review branch of the merge request. The release1.1 branch is used as the target branch.

    If a merge request with the same review and target branches was created in an older build of the job, VB Studio uses that merge request to merge the application build file updates.

  7. Click the merge request link to open it in another tab or window of the browser, and click OK.
  8. In the Merge Request, review the details of the vulnerabilities in the Conversation tab and the application build file changes (POM is shown) in the Changed Files tab:
    The Changed Files tab comparing the application build file (POM) to the review and the target branch
  9. If you've invited other reviewers, wait for their feedback.
  10. If you've linked a build job to the merge request, in the Linked Builds tab, run a build and verify its stability.
  11. When you're ready to merge the application build file updates, click Merge.
  12. In the Merge dialog box, to delete the review branch, select the Delete branch check box. To resolve linked issues, select the Resolve linked issues check box and the check boxes of issues you want to resolve.
  13. Click Create a merge request.
  14. Run a build of the job that reported dependency vulnerabilities and verify that the application build file's update has fixed the vulnerability.

    If a vulnerability is still found, repeat the preceding steps to create another merge request after selecting a different dependency version.

Run UNIX Shell Commands

You can configure a job to run a shell script or commands when a build runs:

  1. Open the job’s configuration page.

  2. Click Configure the Tools icon.

  3. Click the Steps tab.

  4. From Add Step, select Unix Shell.

  5. In Script, enter the shell script or commands.

    The script runs with the workspace as the current directory. If no header line, such as #!/bin/sh, is specified in the shell script, the system shell is used. You can also use the header line to write a script in another language, such as Perl (#!/bin/perl), or to control the options that the shell uses.

    You can also use Kubernetes, PSMcli, Docker, Terraform, Packer, and OCIcli commands in the Shell script. Make sure that you have the required software in the job’s Build VM template before you run a build.

  6. To show the values of the variables and hide the input-output redirection in the build log, select the (-x) Expand variables in commands, don’t show I/O redirection option.

    To show the command as-it-is in the build log, select the (-v) Show commands exactly as written option.

  7. Click Save.

Tip:

  • By default, when a build runs, it invokes the shell with the -ex option, which prints each command before it runs and fails the build if any command exits with a non-zero exit code. You can change this behavior by adding a #!/bin/... line to the shell script.

  • If you have a long script, create a script file, add it to the Git repository, and then run the script using a command, such as bash -ex /myfolder/myscript.sh.

  • To run Python 3, create an isolated environment using virtualenv. See https://virtualenv.pypa.io/en/stable/.

    For example, to create a virtual environment, add these commands as a Shell build step:

    pip3 list
    cd $WORKSPACE
    python3 -m venv mytest
    cd mytest/bin
    ./pip3 list
    ./pip3 install --upgrade pip requests setuptools selenium
    ./pip3 list
    ./python3 -c 'import requests; r=requests.get('\''https://www.google.com'\''); print(r.status_code)'
    ./pip3 uninstall -y requests
    ./pip3 list 
  • If both Python 2 and Python 3 are available in the job’s Build VM template, to call Python, use these commands:

    Command           Version
    python, python2   Python 2
    python3           Python 3
    pip               pip of Python 3
    pip3              pip of Python 3

  • To clone an external Git repository using a shell command, use the internal URL of the external Git repository. To copy the URL, open the Git page and, from the Repositories drop-down list, select the external Git repository. From the Clone menu, click the Copy to clipboard icon next to the Clone with HTTPS from internal address URL:

    the Clone menu of an external Git repository

    If you’re using an Oracle Linux 6 VM in your job, remove the username from the URL before using it in a shell command. For example, if the copied URL is https://alex.admin@developer.us2.oraclecloud.com/mydomain-usoracle22222/s/developer1111-usoracle22222_myproject/scm/myextrepo.git, then remove alex.admin@ and use https://developer.us2.oraclecloud.com/mydomain-usoracle22222/s/developer1111-usoracle22222_myproject/scm/myextrepo.git in your shell command.
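The -ex behavior described in the first tip can be seen in a short sketch: with -e the step stops at the first failing command, and with -x each command is echoed to the build log before it runs.

```shell
set -ex               # what the build system uses by default

echo "compiling"      # -x prints this command line before running it
true                  # exit code 0, so execution continues
echo "done"
# A failing command here (for example, `false`) would end the step
# immediately with a non-zero exit code and mark the build as failed.
```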

Build Maven Applications

Using Apache Maven, you can automate your build process and download dependencies, as defined in the POM file:

  1. Upload the Maven POM files to the project Git repository.
  2. Open the job’s configuration page.
  3. Click Configure the Tools icon.
  4. In the Git tab, add the Git repository where you uploaded the build files.
  5. Click the Steps tab.
  6. From Add Step, select Maven.
  7. In Goals, enter Maven goals, or phases, along with their options. By default, clean and install goals are added.

    For more information about Maven goals, see the Maven Lifecycle Reference documentation at http://maven.apache.org.

  8. In POM file, enter the Maven POM file name and path, relative to the workspace root. The default value is pom.xml at the Git repository root.
  9. If necessary, specify the advanced Maven options:
    Action How To
    Use a private repository for builds Select the Use Private Repository check box.

    You may want to use it to make sure that other Maven build artifacts don’t interfere with the artifacts of this job’s builds. When a build runs, it creates a Maven repository in the .maven/repo directory of the build executor workspace. Remember that selecting this option consumes more of the workspace's storage space.

    Use a private temporary directory for builds Select the Use Private Temp Directory check box.

    You may want to use it to create a temporary directory for artifacts or temporary files. When a build runs, it creates a .maven/tmp directory in the workspace. The temporary files may consume a large amount of storage, so remember to clean up the directory regularly.

    Work offline and don’t access remote Maven repositories Select the Offline check box.
    Activate Maven profiles In Profiles, enter a list of profiles, separated by commas.

    For more information about Maven profiles, see the Maven documentation at http://maven.apache.org.

    Set custom properties In Properties, enter custom system properties in the key=value format, specifying each property on its own line. When a build runs, the properties are passed to the build executor in the standard way (example: -Dkey1=value1 -Dkey2=value2).
    Set the Maven verbosity level From Verbosity, select the level.

    You may want to use it to set the verbosity of the Maven log output to the build log.

    Set the checksum mode From Checksum, select the mode.

    You may want to use it to set the check-sum validation strictness when the build downloads artifacts from the remote Maven repositories.

    Set handling of the SNAPSHOT artifacts From Snapshot, select the mode.
    Include other Maven projects in the reactor In Projects, enter a comma- or space-separated list of Maven projects to include in the reactor. The reactor is the mechanism in Maven that handles multi-module projects. A project can be specified by [groupId]:artifactId or by its relative path.
    Resume a Maven project from the reactor In Resume From, enter the name of the Maven project from which to resume the reactor. The project can be specified by [groupId]:artifactId or by its relative path.
    Set the failure handling mode From Fail Mode, select the mode.

    You may want to use it to set how the Maven build proceeds in case of a failure.

    Set the Make-like reactor mode From Make Mode, select the mode. You may want to use it enable Make-like build behavior.
    Configure the reactor threading model In Threading, enter the value for experimental support for parallel builds. For example, a value of 3 indicates three threads for the build.
    Pass parameters to Java VM In JVM Options, enter the parameters. The build passes the parameters as MAVEN_OPTS.
  10. Click Save.
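As a rough illustration, the options above map onto flags of a plain mvn invocation. This sketch assembles (but doesn't run) such a command line; the profile names, properties, and goals are hypothetical:

```shell
# Sketch only: assemble the mvn command that the options above imply.
# All values (profiles, properties, goals) are hypothetical examples.
PROFILES="dev,ci"                       # Profiles field
PROPS="-Dkey1=value1 -Dkey2=value2"     # Properties field
GOALS="clean install"                   # Maven goals
CMD="mvn --offline -P $PROFILES $PROPS $GOALS"
echo "$CMD"
```

When a build runs, the build executor assembles an equivalent invocation from the job's configuration, so you normally never type this command yourself.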
Use the WebLogic Maven Plugin

The WebLogic server includes a Maven plugin that you can use to perform various deployment operations against the server, such as deploy, redeploy, and update. The plugin is available in the VB Studio build executor. For more information about how to use the WebLogic Maven plugin, see Fusion Middleware Deploying Applications to Oracle WebLogic Server in Oracle Fusion Middleware Online Documentation Library.

When a build runs, the build executor creates an empty Maven repository in the workspace. To install the WebLogic plugin every time a build starts, in the job configuration, add a shell command to install the plugin and then deploy it:
  1. Open the job’s configuration page.

  2. Click Configure the Tools icon.

  3. Click the Steps tab.

  4. From Add Step, select Unix Shell.

  5. In Script, enter these commands:

    # Install the WebLogic Maven plugin JAR from the local WebLogic
    # installation into the build's Maven repository, then run its deploy goal:
    mvn install:install-file -Dfile=$WLS_HOME/server/lib/weblogic-maven-plugin.jar -DpomFile=$WLS_HOME/server/lib/pom.xml
    mvn com.oracle.weblogic:weblogic-maven-plugin:deploy
  6. Click Save.
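The deploy goal usually needs connection details passed as system properties. This hypothetical invocation (printed as text here; the host, credentials, and archive path are placeholders, so check the WebLogic Maven plugin documentation for the full parameter list) shows the general shape:

```shell
# Hypothetical: the deploy goal with connection details as system properties.
# Printed here rather than executed; all values are placeholders.
cat <<'EOF'
mvn com.oracle.weblogic:weblogic-maven-plugin:deploy \
    -Dadminurl=t3://adminhost:7001 -Duser=weblogic \
    -Dpassword=welcome1 -Dsource=target/myapp.war
EOF
```

In a real job, you'd put the mvn command itself (not the cat wrapper) in the shell step, and pass secrets through build parameters rather than hard-coding them.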

Upload to or Download Artifacts from the Project Maven Repository

To upload artifacts to the Maven repository, you'll use the distributionManagement snippet in the POM file. To download artifacts from the Maven repository, use the repositories snippet in the POM file:

  1. To upload a build artifact to the Maven repository, copy the distributionManagement snippet of the project’s Maven repository and add it to the POM file:
    1. In the navigation menu, click Maven Maven.
    2. On the right side of the page, click Browse.
    3. In the Artifact Details section, expand Distribution Management.
    4. In the Maven tab, click Copy Copy to clipboard to copy the <distributionManagement> code snippet to the clipboard.
    5. Open the POM file of your project in a code editor (or a text editor) and paste the contents of the clipboard under the <project> element:
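The pasted <distributionManagement> element might look like this; the repository id, name, and URL are placeholders that you copy from your project's Maven page:

```xml
<distributionManagement>
  <repository>
    <id>Demo_repo</id>
    <name>Demo Maven Repository</name>
    <url>http://developer.us2.oraclecloud.com/profile/my-org/s/my-org_demo_12345/maven/</url>
  </repository>
</distributionManagement>
```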
  2. To download an artifact from the Maven repository, use the repositories snippet of the project’s Maven repository:
    1. In the navigation menu, click Maven Maven.
    2. On the right side of the page, click Browse.
    3. In the Artifact Details section, expand Distribution Management.
    4. In the Maven tab, copy the <repository> element of the Distribution Management to the clipboard.
    5. Open the POM file of your project in a code editor (or a text editor) and paste the <repository> element in the <repositories> element under <project>:
      <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.example.employees</groupId>
        <artifactId>employees-app</artifactId>
        <packaging>war</packaging>
        <version>0.0.1-SNAPSHOT</version>
        <name>employees-app Maven Webapp</name>
        <url>http://maven.apache.org</url>

        <repositories>
          <repository>
            <id>Demo_repo</id>
            <name>Demo Maven Repository</name>
            <url>http://developer.us2.oraclecloud.com/profile/my-org/s/my-org_demo_12345/maven/</url>
          </repository>
        </repositories>
        ...
      </project>
  3. Save the file, commit it to the Git repository, and then push the commit.
  4. Configure the job to add a Maven step and add the required Maven goals.


    Tip:

    Use the deploy goal to upload Maven artifacts to the project’s Maven repository.

  5. Run a build of the job.
  6. If you configured the job to upload artifacts to the project’s Maven repository, verify the artifacts on the Maven page after the build is successful.

You don't have to provide the credentials in settings.xml to access the project’s Maven repository when you run a build. Build jobs have full access to the project’s Maven repository for uploads and downloads.

Build Ant Applications

You can use Apache Ant to automate your build processes, as defined in its build files.

  1. Upload the Ant build files (such as build.xml and build.properties) to the project Git repository.

  2. Open the job’s configuration page.

  3. Click Configure the Tools icon.

  4. In the Git tab, add the Git repository where you uploaded the build files.

  5. Click the Steps tab.

  6. From Add Step, select Ant.

  7. In Targets, specify the Ant targets or leave it empty to run the default Ant target specified in the build file.

  8. In Build File, specify the path of the build file.

  9. If necessary, in Properties, specify the values for properties used in the Ant build file:

    # comment
    name1=value1
    name2=$VAR2

    When a build runs, these values are passed to Ant as -Dname1=value1 -Dname2=value2. Always use $VAR for parameter references instead of %VAR%. Use a double backslash (\\) to escape a backslash (\). Avoid using double quotes ("). To define an empty property, use varname= in the script.

  10. If your build requires a custom ANT_OPTS, specify it in Java Options. You may want to use it to specify Java memory limits (for example, -Xmx512m). Don’t specify other Ant options here (such as -lib); specify them in Targets instead.

  11. Click Save.

For more information, see https://ant.apache.org/.
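The Targets, Build File, and Properties fields all act on an ordinary Ant build file. A minimal hypothetical build.xml, with a property that the job's Properties field could override (via -Dname1=...), might look like this:

```xml
<project name="demo" default="compile">
  <!-- name1 can be overridden from the job's Properties field -->
  <property name="name1" value="default-value"/>
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>
</project>
```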

Build Gradle Applications

Using Gradle, you can automate your build processes as defined in its build script. For more information about Gradle, see https://gradle.org/.

Gradle 5 is available in VB Studio. To use another version of Gradle, use the Gradle Wrapper in the Gradle build step. Gradle recommends the Wrapper as the preferred way to run a Gradle build. To learn more about using the Gradle Wrapper, see https://docs.gradle.org/current/userguide/gradle_wrapper.html.
Set Up a Build VM and a Build VM Template with Gradle
Before you can create a Build step that uses Gradle commands, your organization administrator must create a Build VM template that includes the Gradle software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add Gradle commands.

Configure a Job to Run Gradle Commands

Create and configure a job that runs Gradle commands:

  1. Upload the build.gradle file to a project's Git repository.
  2. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the Gradle template. Jump to step 5.
  3. Click Settings the Gear icon.
  4. In the Software tab, select the Gradle template.
  5. Click Configure the Tools icon.
  6. In the Git tab, add the Git repository where you uploaded the build file.
  7. Click the Steps tab.
  8. From Add Step, select Gradle.
  9. To call the Gradle installation available on the build executor, select Use 'gradle' executable. To use Gradle wrapper, select Use 'gradlew' wrapper.

    If you selected Use 'gradlew' wrapper, deselect the Create 'gradlew' wrapper check box if you don't want to create a new Gradle wrapper when a build runs. If the check box isn't selected, make sure that the gradlew executable is in the $WORKSPACE directory. If the gradlew executable is in the root build script directory, select the In root build script directory check box.

    To use another version of Gradle, specify the version in Gradle version.

    Tip:

    To change the Gradle version when a build runs, add a build parameter and use it here. When a build runs, users can then override the default Gradle version and specify another one.
  10. In Tasks, enter Gradle tasks.
  11. In Build File, enter the name and path of the Gradle build.gradle file. This path must be relative to the root build script directory, if one is specified; otherwise, it's relative to the $WORKSPACE directory.
  12. In Root build script directory, enter the directory path that contains the top-level build.gradle file and serves as the project root. The path must be relative to the $WORKSPACE directory.
    If left empty, the path defaults to build.gradle in the root directory.
  13. In Switches, enter Gradle switches.
  14. If you’re using a build executor that is shared by other jobs or users, select the Force GRADLE_USER_HOME to use workspace check box to set GRADLE_USER_HOME to the workspace.
    By default, GRADLE_USER_HOME is set to $HOME/.gradle, so with this option you can avoid encountering unwanted changes in the default shared directory.
  15. Click Save.
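The Tasks and Switches fields drive an ordinary Gradle build script checked into the Git repository. A minimal hypothetical build.gradle that the steps above could run (for example, with build in Tasks) might be:

```groovy
// Hypothetical build.gradle committed to the project's Git repository.
plugins {
    id 'java'
}
repositories {
    mavenCentral()
}
```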
Build Node.js Applications

Using Node.js, you can develop applications that run JavaScript on a server. For more information, see https://nodejs.org.

Set Up a Build VM and a Build VM Template with Node.js
Before you can create a Build step that uses Node.js, your organization administrator must create a Build VM template that includes the Node.js software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add a Node.js script.

Configure a Job to Build a Node.js Application

Create and configure a job that builds a Node.js application:

  1. If you have a Node.js script, upload it to the project Git repository.
  2. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the Node.js template. Jump to step 5.
  3. Click Settings the Gear icon.
  4. In the Software tab, select the Node.js template.
  5. Click Configure the Tools icon.
  6. In the Git tab, add the Git repository where you uploaded the script file.
  7. Click the Steps tab.
  8. From Add Step, select Node.js.
  9. To specify the script file, in Source, select NodeJS File. In NodeJS File Path, specify the file path in the Git repository.

    To specify the script, in Source, select Script. In NodeJS Script, enter the script.

  10. To speed up build execution time, you can use a Unix Shell step to install NPM packages globally on your build VM(s) by running NPM commands with the --global option.
    Modules such as Gulp, Grunt, Bower, and the Oracle DB Node package come preinstalled on a Compute VM. Not all modules are available across all versions of Node, and these packages quickly become outdated and are superseded by newer versions. With the --global option, you can install the NPM packages you need on a build VM, which also makes them available to subsequent builds that run on the same build VM. This saves significant time compared with installing the same packages locally, which must be reinstalled in every subsequent build.
    For more information, see Global vs. Local Installation.
  11. Click Save.
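Because module availability varies across Node versions, a Unix Shell step can also guard against running on a Build VM whose Node.js is too old. A minimal sketch (the required major version is a made-up example):

```shell
# Hypothetical guard for a Unix Shell step: warn (or fail) if the Build VM's
# Node.js is older than the version the application needs.
REQUIRED_MAJOR=10   # assumption: the application needs Node 10+
MAJOR=$(node --version 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')
if [ "${MAJOR:-0}" -lt "$REQUIRED_MAJOR" ]; then
  echo "Node.js $REQUIRED_MAJOR+ required, found: ${MAJOR:-none}"
  # exit 1   # uncomment to mark the build as failed
fi
```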
Access an Oracle Database Using SQLcl

Using SQLcl, you can run SQL statements from a build to connect to and access an Oracle Database. You can use SQLcl to access any publicly available Oracle Database that you can connect to using a JDBC connect string. You can run DML, DDL, and SQL*Plus statements. You can also use SQLcl in a test scenario and run SQL scripts to initialize seed data or validate database changes.

SQLcl requires Java SE 1.8 or later. To learn more about SQLcl, see http://www.oracle.com/technetwork/developer-tools/sqlcl/overview/index.html. Also see Using the help command in SQLcl in Using Oracle Database Exadata Express Cloud Service and the SQL Developer Command-Line Quick Reference documentation to learn more about supported SQLcl commands.

To connect to Oracle Database Exadata Express Cloud Service, download the ZIP file that contains its credentials and upload it to the job’s Git repository. You can download the ZIP file from the Oracle Database Cloud Service service console. See Downloading Client Credentials in Using Oracle Database Exadata Express Cloud Service.

Set Up a Build VM and a Build VM Template with SQLcl
Before you can create a Build step that uses SQLcl commands, your organization administrator must create a Build VM template that includes the SQLcl software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add SQLcl commands.

Configure a Job to Run SQLcl Commands
Before you configure the job, note these points:
  • VB Studio doesn’t support SQLcl commands that edit the buffer or the console (such as set sqlformat csv).
  • VB Studio doesn’t support build parameters in the SQL file.
  • If you are using Oracle REST Data Services (ORDS), some SQLcl commands, such as the BRIDGE command, require a JDBC URL:

    BRIDGE table1 as "jdbc:oracle:thin:DEMO/demo@http://examplehost.com/ords/demo"(select * from DUAL);

  • To mark a build as failed if the SQL commands fail, add the WHENEVER SQLERROR EXIT 1 line to your script.
  1. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the SQLcl template. Jump to step 5.
  2. Click Settings the Gear icon.
  3. In the Software tab, select the SQLcl template.
  4. From the Java drop-down list, select version 1.8.x or later.
  5. Click Configure the Tools icon.
  6. In the Git tab, add the Git repository where you uploaded the script file.
  7. Click the Steps tab.
  8. From Add Step, select SQLcl.
  9. In Username and Password, enter the user name and password of the Oracle Database account.

    You can also use build parameters in Username and Password.

  10. To connect to Oracle Database Exadata Express Cloud Service, in Credentials File, enter the workspace path of the uploaded credentials zip file.
  11. In Connect String, enter the JDBC or HTTP connection string of the Oracle Database account, using either the host_name:port:SID or the host_name:port/service_name format.

    Here's a JDBC example:

    test_server.oracle.com:1521:adt1100

    In this example, adt1100 is the SID, and ora11g is the service name in test_server.oracle.com:1521/ora11g.

    Here's an HTTP example:

    http://test_server.oracle.com:8085/ords/demo

    You can also use build parameters in Connect String.

  12. If the SQL statements are available in a file uploaded to the project Git repository, in Source, select SQL File. In SQL File Path, enter the Git repository path of the SQL file. You can copy the file’s path from the Git page.
    To enter SQL statements, in Source, select Inline SQL. In SQL Statements, enter the SQL statements. You can also use build parameters in SQL Statements.
  13. In Role, if necessary, select the database role of the user.
  14. In Restriction Level, if necessary, specify the restriction level on the type of SQL statements that are allowed to run.
  15. Click Save.

When a build runs, VB Studio stores your Oracle Database credentials in the Oracle Wallet. Check the build’s log for the SQL output or errors.
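Putting the notes above together, a SQL file for this step might start with the failure guard and then run ordinary statements. This is a hypothetical example; the table and values are made up:

```sql
-- Mark the build as failed if any statement errors out.
WHENEVER SQLERROR EXIT 1

-- Hypothetical seed data for a test scenario.
INSERT INTO employees (id, name) VALUES (1, 'Ada');
COMMIT;
```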

Run Oracle PaaS Service Manager Commands Using PSMcli

Using Oracle PaaS Service Manager command line interface (PSMcli) commands, you can create and manage the lifecycle of various services in Oracle Public Cloud. You can create service instances, start or stop instances, or remove instances when a build runs.

For more information about PSMcli and its commands, see About the PaaS Service Manager Command Line Interface in PaaS Service Manager Command Line Interface Reference.

Set Up a Build VM and a Build VM Template with PSMcli
Before you can create a Build step that uses PSMcli commands, your organization administrator must create a Build VM template that includes the PSMcli software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add PSMcli commands.

Configure a Job to Run PSMcli Commands

Create and configure a job that runs PSMcli commands:

  1. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the PSMcli template. Jump to step 5.
  2. Click Settings the Gear icon.
  3. In the Software tab, select the PSMcli template.
  4. Click Configure the Tools icon.
  5. In the Git tab, add the Git repository where you uploaded the script file.
  6. Click the Steps tab.
  7. From Add Step, select PSMcli.
  8. In Username and Password, enter the user name and password of the Oracle Cloud account.
  9. In Identity Domain, enter the identity domain.
  10. In Region, select your identity domain’s region.
  11. In Output Format, select the preferred output format: JSON (default) or HTML.
  12. Scroll up and from Steps, select Unix Shell.
  13. In Script, enter the PSM commands on separate lines.
  14. Click Save.

You can add multiple shell steps to run different groups of commands. Don’t add the PSMcli build step again.
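A Unix Shell step following the PSMcli step simply lists PSM commands, one per line. As a sketch, the commands follow the psm <service-type> <operation> pattern; the service type and flag below are illustrative assumptions, printed here rather than executed:

```shell
# Hypothetical contents of a Unix Shell step after the PSMcli step.
# Printed rather than executed; verify command names and flags against the
# PSMcli reference for your service.
cat <<'EOF'
psm jcs services --output-format json
EOF
```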

Use OCIcli to Access Oracle Cloud Infrastructure Services

You can use Oracle Cloud Infrastructure command line interface (OCIcli) commands to create and manage Oracle Cloud Infrastructure objects and services when a build runs.

For more information about OCIcli and its commands, see the Oracle Cloud Infrastructure Command Line Interface documentation.
To configure the job, you'll need this information:
  • User OCID
  • Private key
  • Fingerprint of a user who can create and access the resources
  • Tenancy name
Contact the OCI administrator to get the required OCI input values. Get OCI Input Values explains where these values can be found.
Set Up a Build VM and a Build VM Template with OCIcli
Before you can create a Build step that uses OCIcli commands, your organization administrator must create a Build VM template that includes the OCIcli software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add OCIcli commands.

Configure a Job to Run OCIcli Commands

Create and configure a job that runs OCIcli commands:

  1. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the OCIcli template. Proceed to step 4.
  2. Click Settings the Gear icon.
  3. In the Software tab, select the OCIcli template.
  4. Click Configure the Tools icon.
  5. Click the Steps tab.
  6. From Add Step, select OCIcli.
  7. In User OCID, enter the OCID of the user who can access or create OCI resources.
  8. In Fingerprint, enter the public key fingerprint of the user.
  9. In Tenancy, enter the tenancy OCID.
  10. In Private Key, enter the private key of the user.
  11. In Region, select the Oracle Cloud Infrastructure tenancy’s region.
  12. Scroll up and from Add Step, select Unix Shell.
  13. In Script, enter the OCIcli commands on separate lines.
  14. Click Save.

Add multiple Unix Shell steps to run additional sets of commands. Don’t add another OCIcli build step.
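A Unix Shell step after the OCIcli step lists oci commands, one per line. For example (printed here rather than executed; COMPARTMENT_OCID would come from a hypothetical build parameter):

```shell
# Hypothetical Unix Shell step contents following the OCIcli step.
# Printed rather than executed in this sketch.
cat <<'EOF'
oci iam compartment list --output table
oci compute instance list --compartment-id "$COMPARTMENT_OCID" --output table
EOF
```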

Run Docker Commands

You can configure a job to run Docker commands on a Docker container when a build runs.

You should use the Docker container only for short tests and builds. Don’t run a Docker container for long tests or builds; otherwise, the builds might not finish. For example, if you use a Docker image that listens on a certain port and behaves as a web server, the build won’t exit.

For more information about Docker commands, see https://docs.docker.com/.

Tip:

If you face a network issue when you run Docker commands, try adding the HTTP_PROXY and HTTPS_PROXY environment variables in the Docker file.
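For example, a Dockerfile can accept the proxy settings as build arguments and expose them as environment variables. This is a sketch; supply whatever proxy URLs your network requires as build arguments:

```dockerfile
# Hypothetical Dockerfile fragment: pass proxy settings into the image build.
ARG HTTP_PROXY
ARG HTTPS_PROXY
ENV HTTP_PROXY=${HTTP_PROXY} \
    HTTPS_PROXY=${HTTPS_PROXY}
```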
Set Up a Build VM and a Build VM Template with Docker
Before you can create a Build step that uses Docker commands, your organization administrator must create a Build VM template that includes the Docker software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add Docker commands.

Configure a Job to Run Docker Commands

Create and configure a job that runs Docker commands:

  1. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the Docker template. Proceed to step 4.
  2. Click Settings the Gear icon.
  3. In the Software tab, select the Docker template.
  4. Click Configure the Tools icon.
  5. Click the Steps tab.
  6. From Add Step, select Docker, and then select the Docker command:
    Use this command ... To ...
    login

    Log in to the Docker registry.

    In Registry Host, select a pre-linked Docker registry, or enter the Docker registry’s host name where the images are stored. Leave it empty to use Docker Hub.

    In Username and Password, enter the credentials of the user who can access the Docker registry.

    build

    Build Docker images from a Dockerfile.

    Specify the registry host name, the Docker image name, its version tag, any Docker options, and the name and source of the Dockerfile. You can upload the Dockerfile in the Git repository and provide its path, add the Dockerfile code manually, or provide its URL if it’s available on an external source.

    To specify an external source, include the protocol. For example, include http:// in the URL if you’re referencing a remote TAR file, such as http://55.555.555.555/me/mydocker.tar.gz. If you’re referencing a remote repository, use its Git URL instead, as in git://github.com/me/my.git#mybranch:myfolder, for example.

    To learn more about Docker build command options, see https://docs.docker.com/engine/reference/commandline/build/.

    tag

    Create a target image tag that refers to the source image.

    Specify the registry host name, Docker image name, and its version tag name for the source and target images.

    push

    Push an image to the Docker registry.

    To learn more about push options, see https://docs.docker.com/engine/reference/commandline/push/.

    images

    List available images.

    To learn more about images, see https://docs.docker.com/engine/reference/commandline/images/.

    save

    Save an image to a .tar archive file.

    In Output File, specify the relative path and name of the output .tar file in the workspace.

    load

    Load an image from a .tar archive file.

    In Input File, specify the relative path and name of the .tar file in the workspace from which to load the image.

    rmi

    Remove an image. You can remove new images, a specific image, or all images.

    To remove a specific image, enter the host name of the registry where the Docker images are stored. Note that images are stored in the registry only after they've been pushed. Until then, Registry Host is used only to form the fully qualified name of the Docker image on the computer where the image is being created.

    version

    View the version of Docker on the build executor.

  7. Click Save.

The Docker logout command runs automatically after all Docker commands have run.

Trigger a Wercker Pipeline

Wercker is a Docker-based platform that provides continuous integration and continuous delivery (CI/CD) capabilities.

To learn more about Wercker, see https://devcenter.wercker.com/.

Get the Wercker Account's Authentication Token

To trigger the Wercker pipeline from a job’s build, you need your Wercker account’s token:

  1. In a web browser, open https://app.wercker.com/sessions/new and log in using your Wercker or GitHub account.
  2. In the top-right corner of the page, click the user icon and select Settings.
  3. In the left navigation bar, click Personal Tokens.
  4. In Token Name, enter a name and click Generate.
    This is required if you create a new build.
  5. Copy the generated token value and save it somewhere safe.
    It is important that you save the generated token now because if you don't, you won’t be able to view or copy it later.
Configure a Job to Trigger the Wercker Pipeline

Note:

Support for the Wercker build step will be removed in an upcoming VB Studio release. Before that happens, you should configure any pipelines that currently use the Wercker build step to use VB Studio’s pipelines instead. See Design and Use Job Pipelines.

Create and configure a job that triggers the Wercker pipeline:

  1. Open the job’s configuration page.
  2. Click Configure the Tools icon.
  3. Click the Steps tab.
  4. From Add Step, select Wercker.
  5. In Token, paste the Wercker authentication token that you copied.
  6. Verify the values of Application, Pipeline, and Branch.
    The fields are automatically populated.
  7. In Message, enter a message to be passed to Wercker. When a build runs, the message will be displayed in Wercker's Runs tab.
  8. Click Save.

When a build of the job runs, it triggers the Wercker pipeline you specified.

Run Fn Commands

Fn, or Fn Project, is an open-source, container-native, serverless platform for building, deploying, and scaling functions in multi-cloud environments. To run Fn commands when a build runs, you must have access to a Docker container that has a running Fn server.

For more information about Fn, see https://fnproject.io/.

Set Up a Build VM and a Build VM Template with Fn
Before you can create a Build step that uses Fn commands, your organization administrator must create a Build VM template that includes the Fn software and add a Build VM that uses that Build VM template. The template can be created from scratch or software can be added to an existing template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After your organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and add Fn commands.

Configure a Job to Run Fn Commands

Create and configure a job that runs Fn commands:

  1. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the Fn template. Proceed to step 4.
  2. Click Settings the Gear icon.
  3. In the Software tab, select the Fn template.
  4. Click Configure the Tools icon.
  5. Click the Steps tab.
  6. From Add Step, select Fn, and then select the command:
    Use this option ... To ...
    Fn Version

    Log the version of the Fn CLI being used and the version of the Fn Server referenced by the current context, if available, in the build log.

    Fn Build

    Build a new function.

    Specify the relative path of the working directory to build the function, Fn build arguments, Docker registry host, and its user name. If you don’t want to use the Docker registry’s cache, deselect the Use Docker Cache check box. To display the command’s log in the build log, select the Verbose Output check box.

    Fn Push

    Push the image to the Docker registry.

    Specify the relative path of the working directory, Docker registry host, and its user name. To display the command’s log in the build log, select the Verbose Output check box.

    Fn Bump

    Bump the version of the func.yaml file.

    Specify the relative path of the working directory and the bump type (Major, Minor, or Patch). To display the command’s log in the build’s log, select the Verbose Output check box.

    Fn Deploy

    Deploy functions to the function server. Using the deploy command, you can bump, build, push, and update a function.

    In Deploy to App, specify the Fn app name to deploy to. In other fields, specify the working directory, build arguments, Docker registry host, user name, API URL, and the Call URL. Select the desired check boxes, if necessary.

    Fn OCI

    Augment the OCI configuration provided by the OCIcli builder with three additional parameters that are needed for Oracle Functions (the Oracle version of the open-source Fn server). These required OCI parameters are the Oracle Compartment ID, the provider, and the passphrase. The passphrase is the same one used in the OCIcli, although it isn't required there.

    See Oracle Functions Quick Start Guides for more information about these options.

  7. Click Save.
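For orientation, the build steps above correspond roughly to Fn CLI commands. A sketch (the app name is hypothetical; the commands are printed here rather than executed):

```shell
# Rough CLI equivalents of the Fn build steps above.
# Printed rather than executed; the app name is a placeholder.
cat <<'EOF'
fn version
fn build
fn deploy --app demo-app
EOF
```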
Use SonarQube

SonarQube is open-source quality management software that enables you to continuously analyze your application. When you configure a job to use SonarQube, the build generates an analysis summary that you can view from the job or the build details page.

To learn about SonarQube, see its documentation at https://docs.sonarqube.org.

Set Up SonarQube

You must be a Project Owner to add and manage SonarQube connections.

To create the connection, you'll need the URL of a SonarQube server that's available on the public internet.

To set up a SonarQube system for your project's users, create a pre-defined SonarQube connection that they can use:

Action How To

Add a SonarQube connection

  1. In the navigation menu, click Project Administration Gear.

  2. Click Build.

  3. Click the SonarQube Server tab.

  4. Click Add SonarQube Server Connection.

  5. In the Create SonarQube Server dialog box, enter a name for the server, the SonarQube server’s URL, and the credentials of a user who has access to the server.

  6. Click Create.

Edit a connection to change the user credentials or provide another server URL

  1. In the navigation menu, click Project Administration Gear.

  2. Click Build.

  3. Click the SonarQube Server tab.

  4. Click the connection name and then click the Edit icon.

  5. In the Edit SonarQube Server dialog box, as necessary, update the SonarQube server’s URL and the credentials of a user who can access the server.

  6. Click Update.

Delete the connection

  1. In the navigation menu, click Project Administration Gear.

  2. Click Build.

  3. Click the SonarQube Server tab.

  4. Click the connection name and then click Delete.

  5. In the Delete SonarQube Server dialog, click Delete.

Configure a Job to Connect to SonarQube

You can configure a job to use SonarQube from the Before Build tab and then add a post-build action to publish its reports:

  1. Open the job’s configuration page.
  2. Click the Before Build tab.
  3. From Add Before Build Action, select SonarQube Settings.
  4. From Sonar Server, select the pre-configured SonarQube server.

    The Username, Password, and SonarQube Server URL fields display the selected connection's details. To add a server, contact the organization administrator.

  5. To provide the SonarQube project name and the SonarQube project key, expand Advanced SonarQube Settings, and update the values. Make sure that the SonarQube project key is unique.

    By default, the project key is set to <organization>_<projectname>.<jobname> and the project name is set to <projectname>.<jobname>.

  6. Click the After Build tab.
  7. From Add After Build Action, select SonarQube Result Publisher.
  8. To use the SonarQube Quality Gate status as the build status, select the Apply SonarQube quality gate status as build status check box.
    If the SonarQube Quality Gate status is Passed, the build is marked as successful. If the SonarQube Quality Gate status is Failed, the build is marked as failed. To learn about SonarQube Quality Gates, see https://docs.sonarqube.org/display/SONAR/Quality+Gates.
  9. To create an archive file that contains the SonarQube analysis files, select the Archive Analysis Files check box.
  10. Click Save.
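
The default project key and name described in step 5 are derived from the organization, project, and job names. Here's a quick sketch of that naming convention (the organization, project, and job names below are hypothetical):

```shell
# Hypothetical organization, project, and job names
ORG=acme
PROJECT=storefront
JOB=build-main
# Default SonarQube project key: <organization>_<projectname>.<jobname>
SONAR_KEY="${ORG}_${PROJECT}.${JOB}"
# Default SonarQube project name: <projectname>.<jobname>
SONAR_NAME="${PROJECT}.${JOB}"
echo "$SONAR_KEY"
echo "$SONAR_NAME"
```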

To view the SonarQube analysis summary after a build, from the job’s details page, click the SonarQube Analysis Summary icon. The SonarQube Analysis Summary displays the job's SonarQube server URL and the analysis summary.

Use Named Passwords

A named password is a variable that users can use across a project's build job configurations. Named passwords can be used in any password field in the job configuration, such as external Git repositories, SQLcl, PSMcli, and Docker configurations.

When the password changes, update the variable's value and the new password is applied to all jobs and configurations where the variable is used. Note that a named password is not an environment variable. To use a named password as an environment variable, create a Password build parameter and set it to use the named password.

Create a Named Password

You must be a Project Owner to create, edit, or delete a named password:

Action How To

Create a named password

  1. In the navigation menu, click Project Settings Gear.

  2. Click Build.

  3. Click the Named Passwords tab.

  4. Click + Create.

  5. In the Create New Named Password dialog box, in Name, enter a name for the variable. In Password, enter the password.

  6. Click Create.

After creating the named password, share its name with your project users.

Edit a named password

  1. In the navigation menu, click Project Settings Gear.

  2. Click Build.

  3. Click the Named Passwords tab.

  4. Click the password name and then click Edit.

  5. In the Edit Named Password dialog box, update the password.

    You can't change the named password's name.

  6. Click Update.

Delete the named password

  1. In the navigation menu, click Project Settings Gear.

  2. Click Build.

  3. Click the Named Passwords tab.

  4. Click the named password name and then click Delete.

  5. In the Delete Named Password dialog box, click Delete.

After deleting the named password, let your project users know that it's no longer available.

Configure a Job to Use a Named Password

Configure a job that uses a named password:

  1. Open the job’s configuration page.
  2. In the Password field of the component you want to configure, enter the named variable as #{password_name}.
    For example, if the name of the named password is my_password, enter #{my_password}.
  3. Click Save.
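
Remember that a named password isn't an environment variable: to reference it in a shell build step, you'd first create a Password build parameter bound to the named password. The sketch below simulates that flow outside VB Studio; the parameter name DOCKER_PASS, the user name, and the value are hypothetical:

```shell
# In VB Studio, a Password build parameter named DOCKER_PASS could be set to
# #{my_password}; the build executor would then expose it to the shell.
# We simulate that assignment here so the sketch is self-contained.
DOCKER_PASS='s3cret'
# Use the variable instead of a literal password in commands; avoid echoing it.
printf 'docker login -u builder -p %s\n' '********'
[ -n "$DOCKER_PASS" ] && echo "password parameter is set"
```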
Publish JUnit Results

JUnit test reports provide useful information about test results, such as historical test result trends, failure tracking, and so on.

If you use JUnit to run your application's test scripts, you can configure your job to publish JUnit test reports:

  1. Upload your application with test script files to the Git repository.

  2. Open the job’s configuration page.

  3. Click the After Build tab.

  4. From Add After Build Action, select JUnit Publisher.

  5. In Include JUnit XMLs, specify the path and names of XML files to include. You can use wildcards to specify multiple files:
    1. If you’re using Ant, you could specify the path as **/build/test-reports/*.xml.
    2. If you’re using Maven, you could specify the path as target/surefire-reports/*.xml.

    If you use this pattern, make sure that you don’t include any non-report files.

  6. In Exclude JUnit XMLs, specify the path and names of XML report files to exclude. You can use wildcards to specify multiple files.

  7. To see and retain the standard output and errors in the build log, select the Retain long standard output/error check box.

    If you don’t select the check box, the build log is saved, but the build executor truncates it to save space. If you select the check box, every log message is saved, but this might increase memory consumption and can slow the performance of the build executor.

  8. To combine all test results into a single table of results, select the Organize test output by parent location check box.

    If you use multiple browsers, the build executor will categorize the results by browser.

  9. To mark the build as failed when JUnit tests fail, select the Fail the build on fail tests check box.

  10. To archive videos and image files, select the Archive Media Files check box.

  11. Click Save.
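
The include patterns in step 5 are Ant-style globs relative to the workspace. This sketch mimics what the Maven pattern target/surefire-reports/*.xml would match, using find on a throwaway directory (the file names are illustrative; the real matching is done by the build executor, not find):

```shell
# Create a sample workspace layout with one report and one non-report file
ws=$(mktemp -d)
mkdir -p "$ws/target/surefire-reports"
touch "$ws/target/surefire-reports/TEST-com.example.FooTest.xml"
touch "$ws/target/surefire-reports/notes.txt"
# Only the XML report matches the pattern the JUnit Publisher would use
find "$ws" -path '*/target/surefire-reports/*.xml'
```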

After a build runs, you can view its test results.

View Test Results

You can view the JUnit test results of a build from the Test Results page:

Action How To

View test results of the last build

  1. Open the job’s details page.

  2. Click Tests the Tests icon.

View test results of a particular build

  1. Open the job’s details page.

  2. In the Build History table, click the build number.

  3. Click Tests the Tests icon.

View test suite details

On the Test Results page, click the All Tests toggle button. In the Suite Name column, click the suite name.

View details of a test

Open the test suite details page and click the test name.

To view details of a failed test, on the Test Results page, click the All Failed Tests toggle button, and then click the test name.

View test results history

On the Test Results page, click View Test Results History.

If you configured the job to archive videos and image files, click the Show icon to download a test image and click the Watch icon to download a test video file.

The supported image formats are .png, .jpg, .gif, .tif, .tiff, .bmp, .ai, .psd, .svg, .img, .jpeg, .ico, .eps, and .ps.

The supported video formats are .mp4, .mov, .avi, .webm, .flv, .mpg, .gif, .wmv, .rm, .asf, .swf, .avchd, and .m4v.

Use the Xvfb Wrapper

Xvfb is an X server that implements the X11 display server protocol and can run on machines with no physical input devices or display.

Set Up a Build VM and a Build VM Template with Xvfb
Before you can use Xvfb in a Build step, your organization administrator must first create a Build VM template with the minimum required software and then add a Build VM that uses the VM template. Your organization administrator can create a new VM template or use any existing Oracle Linux 7 VM template. (Xvfb isn’t available on an Oracle Linux 6 VM template.)

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

After the organization administrator adds a Build VM to the Build VM template, you can create and configure a job to use that Build VM template and Xvfb.

Configure a Job to Run Xvfb

Create and configure a job that runs Xvfb commands:

  1. Open the job’s configuration page.
    If you're creating a job, in Software Template of the New Job dialog box, select the Xvfb template. Proceed to step 4.
  2. Click Settings the Gear icon.
  3. In the Software tab, select the Xvfb template or any minimum required software template.
  4. Click Configure the Tools icon.
  5. Click the Before Build tab.
  6. From Add Before Build Action, select Xvfb Wrapper.
  7. In Display Number, specify the ordinal number of the display the Xvfb server is running on. The default value is 0. If left empty, a random number is chosen when the build runs.
  8. In Screen offset, specify the offset for display numbers. The default value is 0.
  9. In Screen Size (WxHxD), specify the resolution and color depth of the virtual frame buffer in the WxHxD format. The default value is 1024x758x24.
  10. In Additional options, specify additional Xvfb command line options, if necessary. The default options are -nolisten inet6 +extension RANDR -fp /usr/share/X11/fonts/misc.
  11. In Timeout in seconds, specify the timeout duration for the build to wait before returning control to the job. The default value is 0.
  12. If you don’t want to log the Xvfb output in the build log, deselect the Log Xvfb output check box. The check box is selected by default.
  13. If you don’t want to keep the Xvfb server running for post-build steps, deselect the Shutdown Xvfb with whole job, not just with the main build action check box. The check box is selected by default.
  14. Click Save.
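
Taken together, the fields above roughly correspond to an Xvfb command line. The sketch below just assembles that string for illustration (display number 1, with the default screen size and options from the steps above):

```shell
# Field values from the wrapper configuration (display number chosen here is 1)
DISPLAY_NUM=1
SCREEN='1024x758x24'
OPTS='-nolisten inet6 +extension RANDR -fp /usr/share/X11/fonts/misc'
# Approximate command line the wrapper would run
CMD="Xvfb :$DISPLAY_NUM -screen 0 $SCREEN $OPTS"
echo "$CMD"
```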
Publish Javadoc

If your application source code files are configured to generate Javadoc, you can configure a job to publish Javadocs when a build runs:

  1. Open the job’s configuration page.
  2. Click the After Build tab.
  3. From Add After Build Action, select Javadoc Publisher.
  4. In Javadoc Directory, specify the workspace path where the build executor publishes the generated Javadoc. By default, the path is set to target/site/apidocs.
  5. To configure the build executor to retain Javadoc for each successful build, select the Retain Javadoc for each build check box.
    You may want to enable this option if you need to browse the Javadoc of older builds, but note that retaining older Javadocs consumes more disk space than not retaining them. By default, the check box isn’t selected.
  6. Click Save.
Archive Artifacts

Archived artifacts can be downloaded manually and then deployed. By default, build artifacts are kept as long as the build log is.

If you want a job's builds to archive artifacts, you can do so as an after build action:

  1. Open the job’s configuration page.

  2. Click Configure the Tools icon.

  3. Click the After Build tab.

  4. Click Add After Build Action and select Artifact Archiver.

  5. In Files to archive, enter a comma-separated list of files, including the path.

    Wildcards can be used, but don't use the full path, such as /data/cibuild/7bc4aa61-eb31-4414-b5b2-a089ab2a2a74/workspace/SQL/*.*. Instead, specify the shorter relative path, such as SQL/*.*.

  6. In Files to exclude, enter a comma-separated list of files, including the path, as described in the previous step.

    A file that matches the exclude pattern won’t be archived even if it matches the pattern specified in Files to archive.

  7. If your application is a Maven application and you want to archive Maven artifacts, select Archive Maven Artifacts.

    To archive the Maven POM file along with the Maven artifacts, select Include POM.xml.

  8. Click Save.
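
As noted in step 5, archive patterns must be relative to the workspace. If you start from an absolute path, strip the workspace prefix, as this sketch shows (the workspace path is the example from above; the file name is hypothetical):

```shell
# Absolute workspace path and a file inside it (illustrative)
WORKSPACE=/data/cibuild/7bc4aa61-eb31-4414-b5b2-a089ab2a2a74/workspace
FULL_PATH=$WORKSPACE/SQL/setup.sql
# Remove the workspace prefix to get the relative path for "Files to archive"
REL_PATH=${FULL_PATH#"$WORKSPACE"/}
echo "$REL_PATH"
```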

Discard Old Builds and Artifacts

To save storage space, you can configure a job to discard its old builds and artifacts:

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the General tab, if necessary.

  4. If not selected, select Discard Old Builds.

  5. Configure the discard options.

  6. Click Save.

Old builds will be discarded after you save the job configuration and after a job has been built.

Copy Artifacts from Another Job

If your application depends on artifacts from another job, you can configure the job to copy those artifacts when a build is run:

  1. Open the job’s configuration page.

  2. Click Configure the Tools icon.

  3. Click the Before Build tab.

  4. Click Add Before Build Action and select Copy Artifacts.

  5. In From Job, select the job whose artifacts you want to copy.

  6. In Which Build, select the build that generated the artifacts.

  7. In Artifacts to copy, specify the artifacts to copy. When a build runs, the artifacts are copied with their relative paths.

    If you don't specify a value, the build will copy all artifacts. The archive.zip file is never copied.

  8. In Target Directory, specify the workspace directory where the artifacts will be copied.

  9. To flatten the directory structure of the copied artifacts, select Flatten Directories.

  10. By default, if a build can’t copy artifacts, it'll be marked as failed. If you don’t want the build to be marked as failed, select Optional (Do not fail build if artifacts copy failed).

  11. Click Save.

Configure General and Advanced Job Settings

You can configure several general and advanced job settings, such as the job's name and description, the JDK version used in the build, discarding old builds, running concurrent builds, adding timestamps to the build log, and more:

Action How To

Update the job’s name and description

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the General tab.

  4. In Name and Description, update the job name and description.

  5. Click Save.

Check the software available on the job’s Build VM template

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Software tab.

  4. Review the software and versions. You can change the versions of some software, such as Java SE, by selecting the version from the drop-down list.

  5. Click Save.

Run concurrent builds

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the General tab.

  4. Select the Execute concurrent builds if necessary check box.

    By default, only one build of a job runs at a time. The next build runs after the running build finishes.

  5. Click Save.

Set a quiet period

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Advanced tab.

  4. Select the Quiet period check box and specify the amount of time (in seconds) a new scheduled build of the job will wait before it runs.

    If the build executor is busy with too many builds, setting a longer quiet period can reduce the number of builds.

  5. Click Save.

Set a retry count

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Advanced tab.

  4. Select the Retry Count check box.

  5. In Build Retries, specify the number of times the build executor tries a failed build. By default, the build executor tries five times. You can increase or decrease the count.

    In SCM Retries, specify the number of times the build executor tries to check out files from the Git repository. You can increase or decrease the default count.

  6. Click Save.

Abort a build if it’s stuck for some duration

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Advanced tab.

  4. Select the Abort the build if it is stuck check box.

  5. In Hours and Minutes, specify the duration.

    If a build doesn’t complete in the specified amount of time, the build is terminated automatically and marked as aborted. Select the Fail the build on abort check box to mark the build as failed, rather than aborted.

  6. Click Save.

Remove timestamps from the build log

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Advanced tab.

  4. Deselect the Add Timestamps to the Console Output check box.

    By default, build logs are timestamped. Deselecting the check box removes timestamps from the log.

  5. Click Save.

Set the maximum size of the console log

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Advanced tab.

  4. In Max Log Size (MB), set the size. The default value is 50 MB and the maximum value is 1000 MB.

  5. Click Save.

Manage Build Actions

You can manage build actions in job configurations, including disabling/enabling, reordering, or removing build actions. These operations apply to build actions on the Git, Parameters, Before Build, Steps, and After Build tabs (under Configure) and build actions on the Triggers tab (under Settings).

Action How To

Disable a build action

In any tab on the Job Configuration page, for any enabled build action, change the toggle from Enabled to Disabled and click Save.

Use this toggle to disable the build step or action temporarily. If a step or action is disabled, it'll be skipped when the job is run.

If you see a validation error while trying to save a job configuration after adding, then disabling, a new build action, make sure that you filled out all required fields. Required fields are still required, even though the build action is disabled. You must either fill out the required field(s) in the disabled build action or remove the build action before trying to resave the job configuration.

Enable a disabled build action

In any tab on the Job Configuration page, for any disabled build action, change the toggle from Disabled to Enabled and click Save.

Reorder build actions

In any tab on the Job Configuration page that has multiple build actions, drag and drop any build action to rearrange the order and click Save.

Remove a build action

In any tab on the Job Configuration page, for any enabled or disabled build action, click the Remove icon and click Save.

Change a Job's JDK Version

Change the JDK version used in a job:

  1. Open the job’s configuration page.
  2. Click Settings the Gear icon.
  3. Click the Software tab.
  4. In Available Software, from the Java drop-down list, change the JDK's version number.
    Instead of selecting the JDK, you could select GraalVM, a universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Groovy, Kotlin, Clojure, and LLVM-based languages such as C and C++. To learn more, see https://www.graalvm.org/docs/.
  5. Click Save.
Change a Job’s Build VM Template

Contact the organization administrator to create a Build VM template or install software to a template.

Note:

To find your organization administrator, click Contacts under your user profile. Your administrator, or a list of administrators, will display.

See Create and Manage Build VM Templates in Administering Visual Builder Studio.

You can change a job’s Build VM template after you create the job:

  1. Open the job’s configuration page.

  2. Click Settings the Gear icon.

  3. Click the Software tab.

  4. In Software Template, select the Build VM template that you want to use for your builds.

  5. Click Save.

Run a Build

You can run a job’s build manually or configure the job to trigger it automatically on an SCM commit or according to a schedule:

Action How To

Run a build manually

Open the job’s details page and click Build Now.

You can also run a job’s build from the Jobs Overview page. In the jobs table, click the Build Now icon.

Run a build on SCM commit

See Trigger a Build Automatically on SCM Commit.

Run a build on a schedule

See Trigger a Build Automatically on a Schedule.

View a Job’s Builds and Reports

From the Builds page, click a job name to open its details page, from which you can view a job’s builds, reports, and build history, or perform actions such as running a build or configuring the job.

View a Build’s Logs and Reports

A build generates various types of reports and logs, such as SCM Changes, test results, and action history. You can open these reports from the Job Details page or the Build Details page. On the Job Details page or the Build Details page, click the report icon to view its details.

Here are the types of reports that are generated by a build:

Log/Report Description

Changes SCM Changes

View all files that have changed in the build.

When a build is triggered, the build system checks the job’s Git repositories for SCM changes. If there are any updates, the SCM Change log displays the files that were added, edited, or removed.

Artifacts Artifacts

View the latest archived artifacts generated by the build.

Javadoc Javadoc

View the build's Javadoc output.

The report is available only if the application’s build generated Javadoc.

Tests Tests

View the build’s JUnit test results.

To open the Test Suite details page, on the Test Results page, click the All Tests toggle button and click the suite name in the Suite Name column.

To view details of a test, on the Test Results page, click the All Failed Tests toggle button and then click the test name link in the Test Name column. You can also click the All Tests toggle button, open the test suite details page, and then click the test name link in the Test Name column.

Build Log Console

View the last build’s log. If the log is displayed partially, click the Full Log link to view the entire log. To download the log as a text file, click the Download Console Output link.

Git Log SCM Poll Log

View the Git SCM polling log, which lists builds triggered by SCM polling. The log includes scheduled builds and builds triggered by SCM updates.

On the Job Details page of a job, click the Latest SCM Poll Log icon to view the Git SCM polling log of the last build.

Audit Audit

View the Audit log of user actions.

You can use the Audit log to track user actions on a build. Use the log to see who performed particular actions on the job. For example, you can see who canceled a build of the job, or who disabled the job and when it was disabled.

SonarQube SonarQube Analysis Summary

View the SonarQube analysis report of the job.

Vulnerabilities Vulnerabilities

View the Security Vulnerabilities report that identifies direct and transitive dependencies in the job's Maven, Node.js, Javascript, and/or Gradle projects.

View a Project’s Build History

The Recent Build History page displays builds of all jobs of the project.

To view the build history, in the Build Queue box of the Builds page, click the View Recent Build History link. The history page shows the last 50 builds of the project. Click a job name to open its details page. Click a build number to open its details page. Click Console to open the build’s console and view the console log output.

Tip:

To sort the table data by a column, right-click inside the build history table column and select the sort order from the Sort context menu.
View a Job’s Build History

You can view a job’s build history in the Build History section of the Job Details page. It displays running and completed builds in descending order (most recent first), along with their build numbers, date and time, and a link to each build’s console output.

The build history shows how each build was triggered as well as its status, build number, and date-time stamp. The view also shows a Console icon for opening the build’s console and a Delete icon for deleting the build.

When reviewing the build history, note these points:

  • In the By column, the icons indicate the following:

    This icon ... Indicates:
    User User The build was initiated by a user.
    SCM Change SCM Change The build was triggered by an SCM change.
    Pipeline Pipeline The build was initiated by a pipeline. Click to open the build’s pipeline instance.
    Periodic Build Trigger Periodic Build Trigger The build was triggered by a periodic build trigger.
    Build System Build System The build was started or rescheduled by the build system.
  • In the Build column, an * in the build number indicates the build is annotated with a description. Mouse over the build number to see the description.

  • The list doesn’t show discarded and deleted jobs.

  • If a build remains stuck in the Queued state for a long time, you can mouse over the Queued status to display a message about the problem.

    If the build is using a Build VM, you can contact the organization administrator to check the VM’s status.

  • To sort the table data in ascending or descending order, click the header column name and then click the Previous or Next icon in the column header.

    You can also right-click inside table column and then select the sort order from the Sort context menu.

  • Only project members can delete builds.

View a Job’s User Action History

You can use the Audit log to track a job’s user actions. For example, you can see who cancelled a build of the job, or who disabled the job and when it was disabled.

To open the Audit log, from the job’s details page, click the Audit icon.

The log displays information about these user actions:

  • Who created the job

  • Who started a build or how a build was triggered (followed by the build number), when the build succeeded or failed, and the duration of the build

    A build can also be triggered by a timer, a commit to a Git repository, or an upstream job.

  • Who aborted a build

  • Who changed the configuration of the job

  • Who disabled a job

  • Who enabled a job

View a Build’s Details

A build’s details page shows its status and links to build reports, artifacts, and logs. To open a build’s details page, click the build number in the Build History.

You can perform these common actions from a build’s details page:

Action How To

Keep a build forever

A build that’s marked as ‘forever’ isn’t removed if a job is configured to discard old builds automatically. You can’t delete it either.

To keep a build forever, click Configure, select the Keep Build Forever check box, and click Save.

Add a name and description to a build

Adding a name and description is especially helpful if you’ve marked a build to be kept forever so it isn’t discarded automatically. When you add a description to a build, an * is added to the build number in the Build History table.

To add a name and description, click Configure. In Name and Description, enter the details, and click Save.

Open a build’s log

Click Build Log.

Delete a build

Click Delete.

Download Build Artifacts

Build artifacts are displayed in a directory tree structure. You can click the link to download parts of the tree, including individual files, directories, and subdirectories.

If the job is configured to archive artifacts, you can download them to your computer and then deploy to your web server:

  1. Open the job’s details page.

  2. Click Artifacts.

    To download artifacts of a particular build, in the Build History, click the build number, and then click Artifacts.

  3. Expand the directory structure and click the artifact link (file or directory) to download it.

    To download a zip file of all artifacts, click All files in a zip.

  4. Save the file to your computer.

Watch a Job

You can subscribe to email notifications that you'll receive when a build of a job succeeds or fails.

To get email notifications, enable them in your user preferences, and then set up a watch on the job:

Action How To

Enable your email notifications preference

In your user preferences page, select the Build Activities check box.

Watch a job

  1. Open the job’s details page.

  2. Click the On toggle button, if necessary.

  3. Click CC Me.

  4. In the CC Me dialog box, to receive email when the build is successful, select the Successful Builds check box. Select Failed Builds to receive email when the build fails.

  5. Click OK.

Disable email notifications of the job to all subscribed members

  1. Open the job’s details page.

  2. Click the Off toggle button, if necessary.

Build Executor Environment Variables

When you run a build job, you can use the environment variables in your shell scripts and commands to access the software on the build executor.

To use a variable, use the $VARIABLE_NAME syntax, such as $BUILD_ID.
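
For example, a Unix Shell build step might name an artifact after the job and build number. Outside a VB Studio build these variables are unset, so this sketch supplies fallback values to stay self-contained (the job name and build number below are placeholders):

```shell
# Fall back to sample values when not running inside a build
: "${JOB_NAME:=my-job}"
: "${BUILD_NUMBER:=42}"
# Compose an artifact name from the job name and build number
ARTIFACT="dist/${JOB_NAME}-${BUILD_NUMBER}.zip"
echo "Packaging $ARTIFACT"
```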

Common Variables

Here are some common environment variables:

Environment Variable Description

BUILD_ID

The current build’s ID.

BUILD_NUMBER

The current build number.

BUILD_URL

The full URL of the current build.

BUILD_DIR

The build output directory.

JOB_NAME

The name of the job.

EXECUTOR_NUMBER

The unique number that identifies the current executor (among executors of the same machine) that's running the current build.

HTTP_PROXY

The HTTP proxy for outgoing connections.

HTTP_PROXY_HOST

The HTTP proxy host for outgoing connections.

HTTP_PROXY_PORT

The HTTP proxy port for outgoing connections.

HTTPS_PROXY

The HTTPS proxy for outgoing connections.

HTTPS_PROXY_HOST

The HTTPS proxy host for outgoing connections.

HTTPS_PROXY_PORT

The HTTPS proxy port for outgoing connections.

JOB_URL

The full URL of the current job.

NO_PROXY

A comma-separated list of domain names or IP addresses for which the proxy should not be used. You can also specify port numbers.

NO_PROXY_ALT

A pipe-separated (|) list of domain names or IP addresses for which the proxy should not be used. You can also specify port numbers.

PATH

The PATH variable, set in the build executor, specifies the path of executables in the build executor.

Executables from the software bundles are available on the build executor's PATH variable, which is set to /usr/bin, and can be invoked directly from the Unix shell. You should use the PATH variable and other environment variables to access the installed software.

See Software for Build VM Templates in Administering Visual Builder Studio for more information.

WORKSPACE

The absolute path of the build executor's workspace.
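
NO_PROXY and NO_PROXY_ALT carry the same hosts with different delimiters, so a shell step can derive one from the other. A minimal sketch with a sample host list:

```shell
# Sample comma-separated proxy exclusion list
NO_PROXY='localhost,127.0.0.1,example.com:8080'
# Convert it to the pipe-separated NO_PROXY_ALT form
NO_PROXY_ALT=$(printf '%s' "$NO_PROXY" | tr ',' '|')
echo "$NO_PROXY_ALT"
```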

Software Variables

Environment Variable Description

DYNAMO_HOME

The path of the Oracle ATG home directory.

DYNAMO_ROOT

The path of the Oracle ATG root directory.

GRADLE_HOME

The path of the Gradle directory.

JAVA_HOME

The path of the directory where the Java Development Kit (JDK) or the Java Runtime Environment (JRE) is installed.

If your job is configured to use a specific JDK, the build executor sets the variable to the path of the specified JDK. When the variable is set, PATH is also updated to have $JAVA_HOME/bin.

NODE_HOME

The path of the Node.js home directory.

NODE_PATH

The path of the Node.js modules directory.

To access SOA, use these variables:

  • Use JAVACLOUD_HOME variables to access the Java SDK
  • Use MIDDLEWARE_HOME variables to access Oracle Fusion Middleware. The MIDDLEWARE_HOME directory includes the WebLogic Server installation directory and the Oracle Common library dependencies.
  • Use WLS_HOME variables to access the WebLogic server binary directory
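
A build step can check that the expected SOA variable is present before using it. This is a minimal sketch; the fallback path is illustrative and only keeps the snippet runnable outside a Build VM:

```shell
# Use the job's WLS_HOME_SOA if set; otherwise fall back to an illustrative path
: "${WLS_HOME_SOA:=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/wlserver}"
echo "WebLogic Server home: $WLS_HOME_SOA"
```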

Make sure that you have the right software available in the Build VM Template of your job:

Software Variables
SOA 12.2.1.3

JAVACLOUD_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/jdeveloper/cloud/oracle-javacloud-sdk/lib

JAVACLOUD_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/jdeveloper/cloud/oracle-javacloud-sdk/lib

MIDDLEWARE_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.3.0

MIDDLEWARE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.3.0

ORACLE_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/jdeveloper

ORACLE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/jdeveloper

WLS_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/wlserver

WLS_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.3.0/wlserver

SOA 12.2.1.2

JAVACLOUD_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.2/jdeveloper/cloud/oracle-javacloud-sdk/lib

JAVACLOUD_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.2/jdeveloper/cloud/oracle-javacloud-sdk/lib

MIDDLEWARE_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.2

MIDDLEWARE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.2

ORACLE_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.2/jdeveloper

ORACLE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.2/jdeveloper

WLS_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.2/wlserver

WLS_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.2/wlserver

SOA 12.2.1.1

JAVACLOUD_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.1/jdeveloper/cloud/oracle-javacloud-sdk/lib

JAVACLOUD_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.1/jdeveloper/cloud/oracle-javacloud-sdk/lib

MIDDLEWARE_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.1

MIDDLEWARE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.1

ORACLE_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.1/jdeveloper

ORACLE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.1/jdeveloper

WLS_HOME_SOA_12_2_1=/opt/Oracle/MiddlewareSOA_12.2.1.1/wlserver

WLS_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.2.1.1/wlserver

SOA 12.1.3

JAVACLOUD_HOME_12C3=/opt/Oracle/MiddlewareSOA_12.1.3/jdeveloper/cloud/oracle-javacloud-sdk/lib

JAVACLOUD_HOME_SOA_12_1_3=/opt/Oracle/MiddlewareSOA_12.1.3/jdeveloper/cloud/oracle-javacloud-sdk/lib

JAVACLOUD_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.1.3/jdeveloper/cloud/oracle-javacloud-sdk/lib

MIDDLEWARE_HOME_12C3=/opt/Oracle/MiddlewareSOA_12.1.3

MIDDLEWARE_HOME_SOA_12_1_3=/opt/Oracle/MiddlewareSOA_12.1.3

MIDDLEWARE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.1.3

ORACLE_HOME_12C3=/opt/Oracle/MiddlewareSOA_12.1.3/jdeveloper

ORACLE_HOME_SOA_12_1_3=/opt/Oracle/MiddlewareSOA_12.1.3/jdeveloper

ORACLE_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.1.3/jdeveloper

WLS_HOME_12C3=/opt/Oracle/MiddlewareSOA_12.1.3/wlserver

WLS_HOME_SOA=/opt/Oracle/MiddlewareSOA_12.1.3/wlserver
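A Shell build step can pick these up without hard-coding the installation path. In this sketch, the version-pinned variable is preferred over the generic `*_SOA` one; the `/opt/Oracle/MiddlewareSOA` default is only an assumption for running outside a SOA-enabled Build VM.

```shell
# Prefer the version-pinned variable, then the generic *_SOA one; the final
# default is a placeholder assumption, not a guaranteed path.
mw_home="${MIDDLEWARE_HOME_SOA_12_2_1:-${MIDDLEWARE_HOME_SOA:-/opt/Oracle/MiddlewareSOA}}"
wls_home="${WLS_HOME_SOA:-${mw_home}/wlserver}"
oracle_home="${ORACLE_HOME_SOA:-${mw_home}/jdeveloper}"

echo "Middleware home: ${mw_home}"
echo "WebLogic home:   ${wls_home}"
echo "Oracle home:     ${oracle_home}"
```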

Tip:

  • You can run the env command as a Shell build step to view all environment variables of the build executor.

  • Some Linux programs, such as curl, only support lower-case environment variables. Change the build steps in your job configuration to use lower-case environment variables:

    export http_proxy="$HTTP_PROXY"
    export https_proxy="$HTTPS_PROXY"
    export no_proxy="$NO_PROXY"
    curl -v http://www.google.com
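Combining both tips, a Shell build step might print only the variables it cares about, which keeps the build log readable. This is a sketch; the variable names match those documented above.

```shell
# Print just the job- and proxy-related variables instead of the full env.
env | grep -E '^(JOB_|WORKSPACE=|PATH=|HTTP_PROXY=|HTTPS_PROXY=|NO_PROXY)' | sort
```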

Design and Use Job Pipelines

You can create, manage, and configure job pipelines from the Pipelines tab of the Builds page.

What Is a Pipeline?

A pipeline lets you define dependencies between jobs and create a path, or chain, of builds. A pipeline helps you run continuous integration jobs and reduces network traffic.

To create a pipeline, you design a pipeline diagram that defines the dependencies between jobs. When you make one job depend on another, you define the order in which builds of the dependent jobs run automatically. If required, you can also configure the dependent jobs to use artifacts of the parent job.

For example, in this diagram, Job 2 depends on Job 1 and runs after Job 1 is successful.

In this diagram, Job 2, Job 3, and Job 4 depend on Job 1 and run after Job 1 is successful.

This diagram shows a complex example.

The above diagram defines these dependencies:

  • Job 2 and Job 3 depend on Job 1 and run after Job 1 is successful

  • Job 4 and Job 5 depend on Job 2 and run after Job 2 is successful

  • Job 6 and Job 7 depend on Job 4 and run after Job 4 is successful

  • Job 8 depends on Job 6 and Job 7 and runs after Job 6 and Job 7 are successful

  • Job 1 is the master job. Running Job 1 triggers a chain, and all jobs from Job 1 through Job 8 run automatically, one after the other.

You can create multiple pipeline diagrams of jobs. If multiple pipelines share common jobs, multiple builds of those jobs run. For example, in this figure, Pipeline 1 and Pipeline 2 have common jobs:

Let’s assume that Pipeline 1 is defined first and Pipeline 2 is defined second. If both pipelines are triggered, the builds run in this order:

  1. A build of Job 1 runs.

  2. Builds of Job 2 and Job 3 of Pipeline 1 get in the build executor queue after Job 1 is successful. A build of Job 2 of Pipeline 2 also gets in the build executor queue after Job 1 is successful.

  3. Builds of jobs in the build executor queue run on a first-come, first-served basis. So, Job 2 and Job 3 of Pipeline 1 run first. Let’s call these builds Build 1 of Job 2 and Build 1 of Job 3. Then another build of Job 2, this time from Pipeline 2, runs. Let’s call it Build 2 of Job 2.

  4. A build of Job 4 of Pipeline 1 joins the build executor queue as soon as Job 2 is successful. A build of Job 3 of Pipeline 2 also joins the queue when Job 2 is successful.

  5. As soon as the build executor is available, Build 1 of Job 4 runs and Build 2 of Job 3 also runs. Remember that Build 1 of Job 3 ran in Pipeline 1.

  6. After a build of Job 3 of Pipeline 2 is successful, a build of Job 4 of Pipeline 2 joins the queue and runs when the build executor is available. Remember that this is Build 2 of Job 4 as Build 1 ran in Pipeline 1.

When you create multiple pipeline diagrams with common jobs, be careful if a job depends on artifacts of its parent job.

Set Up a Pipeline

You configure a job pipeline from the Pipelines tab of the Builds page. To set up a pipeline, design a pipeline diagram.

Create a Pipeline

Here's how to create a pipeline:

  1. In the navigation menu, click Builds.
  2. Click the Pipelines tab.

  3. Click + Create Pipeline.

  4. In the Create Pipeline dialog box, in Name and Description, enter a unique name and description.

  5. To trigger the pipeline build when a job of the pipeline is triggered externally (outside the pipeline), select the Auto start when pipeline jobs are built externally check box.

    In the pipeline, builds of jobs following the triggered job run as per the diagram, but builds of jobs preceding the triggered job don’t run.

  6. To disable manual or automatic builds of the jobs that are part of the pipeline when the pipeline is running, select the Disallow pipeline jobs to build externally when the pipeline is building check box.

  7. Click Create.

  8. In the Designing Pipeline page, design the pipeline, and click Save.

Use the Pipeline Designer

You use the pipeline designer to create a pipeline diagram that defines dependencies between jobs and the order of their builds.

The Jobs list on the left side of the page shows all jobs of the project. Drag and drop jobs into the designer area to design the pipeline diagram. Click Configure to configure the dependency condition between a parent and a child job.

Create a One-to-One Dependency

A one-to-one dependency is formed between a parent and a child job. When a build of the parent job is successful, a build of the child job runs automatically.

To create a one-to-one dependency of a child job to its parent job:

  1. From the Jobs list, drag-and-drop the parent job to the designer area.

  2. From the Jobs list, drag-and-drop the dependent (or child) job to the designer area.

  3. To indicate the parent job (the job that triggers the pipeline build), hover over the gray circle on the right of the Start node. The cursor changes to the + cursor:

    In the above example, the Start node indicates the starting point of the pipeline. The Start node is available in all pipelines and can’t be removed. Job 1 is the parent job and Job 2 is the dependent job.

  4. Drag the cursor from the gray circle on the right of the Start node to the white circle on the left side of the parent job's node. An arrow line appears:

  5. Similarly, hover over the blue circle on the right side of the parent job's node and drag the arrow head over the white circle on the left side of the child job's node:

A dependency is now formed. In the above example, Job 2 now depends on Job 1. A build of Job 2 will run automatically after every successful build of Job 1.

To delete a job node or a dependency, select it, and then click Delete.

Create a One-to-Many Dependency

A one-to-many dependency is formed between one parent job and multiple child jobs. When a build of the parent job is successful, builds of child jobs run automatically.

To create a one-to-many dependency between jobs:

  1. From the Jobs list, drag-and-drop the parent job to the designer area.

  2. From the Jobs list, drag-and-drop all dependent (or child) jobs to the designer area:

    Here, Job 1 is the parent job and Job 2, Job 3, and Job 4 are the dependent jobs.
  3. To indicate the parent job (the job that triggers the pipeline build), hover over the gray circle on the right of the Start node. The cursor changes to the + cursor.

  4. Drag the cursor from the gray circle on the right of the Start node to the white circle on the left side of the parent job's node. An arrow line appears:

  5. Similarly, hover over the blue circle on the right side of the parent job's node and drag the arrow head over the white circle on the left side of each child job's node:

A dependency is now formed. In the above example, Job 2, Job 3, and Job 4 now depend on Job 1. Builds of Job 2, Job 3, and Job 4 run automatically after every successful build of Job 1.

To delete a job node or a dependency, select it, and then click Delete.

Create a Many-to-One Dependency

A many-to-one dependency is formed between multiple parent jobs and one child job. When builds of all parent jobs are successful, a build of the child job runs automatically.

To create a many-to-one dependency of a child job on multiple parent jobs:

  1. From the Jobs list, drag-and-drop all parent jobs to the designer area.

  2. From the Jobs list, drag-and-drop the dependent (or child) jobs to the designer area:

    Here, Job 2, Job 3, and Job 4 are the parent jobs and Job 5 is the dependent job.

  3. To indicate the parent job (the job that triggers the pipeline build), hover over the gray circle on the right of the Start node. The cursor changes to the + cursor.

  4. Drag the cursor from the gray circle on the right of the Start node to the white circle on the left side of a parent job's node. An arrow line appears. Repeat this step for all parent jobs.

  5. Similarly, hover over each parent job's blue circle on the right side of its node and drag the arrow head over the white circle on the left side of the dependent job's node:

A dependency is now formed. In the above example, Job 5 depends on Job 2, Job 3, and Job 4. A build of Job 5 will run automatically after builds of Job 2, Job 3, and Job 4 are successful.

To delete a job node or a dependency, select it, and then click Delete.

Configure the Dependency Condition

When you create a dependency between a parent and a child job, by default, a build of the child job runs after the parent job’s build is successful. You can configure the dependency to run a build of the child job after the parent job’s build fails:

  1. In the pipeline designer, click to select the dependency condition arrow.

  2. In the pipeline designer toolbar, click Configure.

  3. In the pipeline configuration flow editor, in Result Condition, select Successful, Failed, or Test Failed.

    You can also double-click the dependency arrow to open the pipeline configuration flow editor. You can’t configure the dependency condition from the Start node.

    Note:

    If you configure the pipeline using YAML, you'll have access to additional options that aren't available in the UI. See Set Dependency Conditions in Pipelines Using YAML.
  4. Click Apply.

Manage Pipelines

You can manage a pipeline by editing the pipeline diagram from the Configure Pipeline page:

Action How To

Design the pipeline diagram

In the Pipelines tab, click Configure for the pipeline whose diagram you want to edit. On the Configuring Pipeline page, click Configure (the Tools icon).

Run a pipeline

To run all jobs of a pipeline in the defined order, in the Pipelines tab, click Build.

View a pipeline’s instances

When you trigger a pipeline, an instance of the pipeline is created. To view the instances, in the Pipelines tab, click the pipeline name.

View a pipeline's instance log
To see the status of a pipeline instance's jobs, click View Log on the Pipeline Instances page.

Note:

You can't use View Log to display logs that were created before 19.4.3. Those logs appear empty.

Edit a pipeline

In the Pipelines tab, click Configure for the pipeline you want to edit. On the Configuring Pipeline page, click Edit Pipeline Settings.

Delete a pipeline

In the Pipelines tab, click Delete next to the pipeline you want to delete.

When you delete a pipeline, you remove only the dependencies and the order of the job builds. The jobs themselves aren’t deleted.

View a Pipeline’s Instances

When a pipeline build is triggered, a pipeline instance is created and is available in the pipeline’s instances page.

To view a pipeline's instances, click its name in the Pipelines tab. The Pipeline Instances page displays the pipeline run history. For each pipeline instance, the page shows the pipeline diagram and its status.

A pipeline's status is determined by its jobs' builds:

  • Success: Indicates that all the pipeline builds were successful.

  • Failed: Indicates that a build of a job in the pipeline failed, causing the pipeline to fail too.

  • Canceled: Indicates that the pipeline was canceled.

  • In Progress: Indicates that the pipeline is in progress.

You can see the status of jobs in a pipeline instance by looking at the instance in the UI. In the pipeline diagram, the color of job nodes indicates the job’s status:

  • Green: The last build of the job was successful

  • White: A build of the job is running or hasn’t run yet

  • Red: The last build of the job failed

To see the full log for each build job in the pipeline instance, navigate to that build job's log. On the Pipeline Instances page, select a build job and click its build number to go to the Build Details page, from which you can access the log for that build. Click Build Log to see the details for that build job.

You can select View Log on the Pipeline Instances page to see a historical record of actions taken by the pipeline. Sometimes the log is helpful to see why a pipeline didn't advance when you expected it to. You can use View Log to see who started the pipeline, when each build was run, and the status of each build job in the pipeline.

Deploy and Manage Your Application

You can use the Oracle Deployment build step to deploy an application extension to an Oracle Cloud Applications instance, a visual application to a Visual Builder instance, or other build artifacts like Java or Node.js applications to Oracle Java Cloud Service (JCS) or Oracle Application Container Cloud Service (ACCS).

Application lifecycle operations are also available for application extensions and visual applications under their respective build step menus:

Some of these operations can also be managed from the activity menu on the Environments page's Deployments tab. From the Deployments tab, you can export data from, import data to, and undeploy a visual application that's deployed to your current identity domain's Visual Builder instance. If your visual application is deployed to another identity domain, you'll have to create and use a Visual Application build step to perform these operations. You can also use the Deployments tab to delete an application extension in an Oracle Cloud Applications instance in the current identity domain. If the application extension is in another identity domain, you'll need to use an Application Extension Delete build step to perform this operation.

Deployment Concepts and Terms

Here are some concepts and terms that this documentation uses to describe deployment functions and components in VB Studio.

Term Description

Deployment target

An instance of the target Oracle Cloud service, the service running Oracle Cloud Applications, or Visual Builder.

Continuous delivery

A method to automatically deploy a build artifact to the target service.

Package, Deploy, and Manage Application Extensions

From the Steps tab on the job's configuration page, you can create an Application Extension Package job that packages an app extension build artifact and an Application Extension Deploy job that deploys the build artifact to an Oracle Cloud Applications development instance, production instance, or any other instance. You can then add these jobs to a pipeline and run them in sequence.

Deployed app extensions can be viewed from the Deployments tab on the Environments page and can be deleted manually from there too, if they are deployed to an Oracle Cloud Applications instance that is in the same identity domain as VB Studio. If an app extension is deployed to an Oracle Cloud Applications instance that is in a different identity domain than VB Studio, you'll have to create and use an Application Extension build step to delete the deployed app extension.

Deploy an App Extension to an Oracle Cloud Applications Development Instance

When you create a project using the Application Extension template, several artifacts are created for you:

  • A Git repository that contains the app extension's source code.
  • A Development environment that points to the development instance where your base Oracle Cloud Application is running.
  • Default build jobs that package and deploy the app extension's artifact to the Oracle Cloud Applications development instance.
  • A pipeline to run the build sequence.
  • Optionally, a private workspace to edit the app extension in the VB Studio Designer.

You'll need to do some configuration for the build steps before you can use them to deploy the app extension's artifact to the Development environment. See Configure the Deployment Job for more information.

Deploy an App Extension to an Oracle Cloud Applications Production Instance

If you want to deploy an app extension to your Oracle Cloud Applications production instance, or any other instance, you'll need to set up separate packaging and deployment jobs for each — Visual Builder Studio does not create them for you. They're very similar to the default build jobs that are created from the Application Extension template. For these jobs, however, you'll also need to create a pipeline on your own to execute the build steps in sequence.

See Create and Configure Build Jobs for information about setting up these jobs for a production environment.

See Create and Configure a Pipeline for more information about setting up a pipeline.

View a Deployed App Extension

After the deployment job has run successfully, you can view the deployed app extension in the Deployments tab of the Environments page.

  1. In the navigation menu, click Environments.
  2. Select the Oracle Cloud Application's environment.
  3. Click the Deployments tab.
  4. Click the Application Extensions toggle button.
  5. If the Oracle Cloud Applications instance's personal access token has expired or its access credentials have changed, provide the token or the credentials again.
  6. Expand the base Oracle Cloud Application to view its deployed app extensions.
    For each app extension, the page displays its ID, name, version, and status.

    Here's an example:

    In the packaging build step, if you didn't specify a version to overwrite the app extension's version, the Version column appends the build's timestamp to the version number and displays it in the <version_number>.<build_run_timestamp> format.

To open the Oracle Cloud Application with the deployed app extension, copy the application's base URL and paste it in a web browser.
Delete an App Extension

You can delete an app extension that's deployed to your current identity domain's Oracle Cloud Applications manually from the Deployments tab of its environment, or configure a build job to delete it.

To delete an app extension that's deployed to Oracle Cloud Applications of another identity domain, configure a build job and run it. You can't delete it manually.
Delete an App Extension Manually
  1. In the navigation menu, click Environments.
  2. Select the environment where the app extension is deployed.
  3. Click the Deployments tab.
  4. Expand the base application's name.
  5. For the app extension you want to delete, click Actions (the hamburger icon) and select Delete.
  6. In the confirmation dialog box, click Delete.
Configure a Job to Delete an App Extension
To delete an app extension through a build job, you'll need either a personal access token or the access credentials of a user who can access the Oracle Cloud Applications instance where the app extension is deployed.
  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.
  3. In the New Job dialog box, in Name, enter a unique name.
  4. In Description, enter the job's description.
  5. In Template, select the System Default OL7 for Visual Builder Build VM template.
  6. Click Create.
  7. On the Job Configuration page, click Configure (the Tools icon).
  8. Click the Steps tab.
  9. From Add Step, select Application Extension, and then select Delete.
  10. In Instance, select the Oracle Cloud Applications instance where the application is deployed.
  11. In Authorization, select one of these options:
    • To authenticate using a username and a password, select Use Credentials. In Username and Password, enter the credentials of a user who can connect to the Oracle Cloud Applications instance.
    • To authenticate using the user's personal access token, select Use Access Token. Click Set Access Token, upload the token file, and click OK.

      Note that personal access tokens have a short life. Before uploading, make sure the access token hasn't expired. If you need to generate a personal access token, see Get a Visual Builder User's Personal Access Token.

  12. In Base Application, Name, and Version, enter the app extension's base application, name, and version.
    You can find the details on the Deployments tab of the environment where the app extension is deployed.

    Example:

  13. Click Save.
  14. To run a build, click Build Now.

Package, Deploy, and Manage Visual Applications

From the Steps tab on the job's configuration page, you can create a Visual Application Package job that packages a visual application build artifact and a Visual Application Deploy job that deploys the build artifact to a Visual Builder development instance, production instance, or any other instance. You can then add these jobs to a pipeline and run them in sequence.

You can deploy a visual application to a standalone Visual Builder instance or to a Visual Builder instance that's part of Oracle Integration.

It’s important to keep these things in mind before you deploy a visual application to a Visual Builder instance:

  • The Visual Builder instance must be version 19.4.3.1 or later.
  • VB Studio doesn't support deployment to the Visual Builder instance available in Oracle Integration Generation 2, so you can't add it to a VB Studio environment.
  • To ensure that business objects work properly, the Visual Builder administrator must manually add the VB Studio hostname to the whitelist for each Visual Builder instance. See Allow Other Domains Access to Services in Administering Oracle Visual Builder.

Deployed visual applications can be viewed from the Deployments tab on the Environments page and can be undeployed manually from there too, if they are deployed to a Visual Builder instance that is in the same identity domain as VB Studio. If a visual application is deployed to a Visual Builder instance that is in a different identity domain than VB Studio, you'll have to create and use a Visual Application build step to undeploy the deployed visual application.

Deploy a Visual Application to a Development Visual Builder Instance

When you create a project using the Visual Application template, several artifacts are created for you:

  • A Git repository that contains the visual application's source code.
  • A Development environment that points to the Visual Builder development instance.
  • Default build jobs that package and deploy the visual application's artifact to the Visual Builder development instance.
  • A pipeline to run the build sequence.
  • Optionally, a private workspace to edit the visual application in the VB Studio Designer.

You'll need to do some configuration for the build steps before you can use them to deploy the application's build artifact to the Development environment. See Configure the Packaging Job and Configure the Deployment Job for more information.

Deploy a Visual Application to a Test or Production Visual Builder Instance

If you want to deploy visual applications to your Visual Builder production instance, or any other instance, you'll need to set up separate packaging and deployment jobs for each. They're very similar to the default build jobs that are created from the Visual Application template. For these jobs, however, you'll also need to create a pipeline on your own to execute the build steps in sequence.

See Create and Configure Production Build Jobs for information about setting up these jobs for a production environment.

View a Deployed Visual Application

After the deployment job runs successfully, you can view the deployed application in the Deployments tab of the Environments page.

  1. In the navigation menu, click Environments.
  2. Select the Visual Builder environment.
  3. Click the Deployments tab.
  4. Click the Visual Applications toggle button.
  5. If the Visual Builder instance is in a different identity domain, provide the personal access token or the access credentials.
  6. Expand the app's name to see the deployed app's link.

    The Deployments tab displays the applications you've deployed from the current project. It doesn't show applications deployed by other users of the project, or applications deployed from other projects.

    Example:

Undeploy a Visual Application

You can undeploy a visual application that's deployed to your current identity domain's Visual Builder instance manually from the Deployments tab of its environment, or configure a build job to undeploy it.

To undeploy a visual application that's deployed to a Visual Builder instance in another identity domain, configure a build job and run it. You can't undeploy it manually.

Undeploy a Visual Application Manually
  1. In the navigation menu, click Environments.
  2. Select the environment where the visual application is deployed.
  3. Click the Deployments tab.
  4. Expand the application.
  5. For the visual application you want to undeploy, click Actions (the hamburger icon) and select Undeploy.
  6. In the confirmation dialog box, click Undeploy.
Configure a Job to Undeploy a Visual Application
To undeploy a visual application through a build job, you'll need either a personal access token or the access credentials of a user who can access the Visual Builder instance where the visual application is deployed.
  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.
  3. In the New Job dialog box, in Name, enter a unique name.
  4. In Description, enter the job's description.
  5. In Template, select the System Default OL7 for Visual Builder template.
  6. Click Create.
  7. Click Configure (the Tools icon).
  8. Click the Steps tab.
  9. From Add Step, select Visual Application, and then select Undeploy.
  10. In Instance, select the Visual Builder instance where the application is deployed.
  11. In Authorization, select one of these options:
    • To authenticate using a username and a password, select Use Credentials. In Username and Password, enter the credentials of a user who can connect to and undeploy from the Visual Builder instance.
    • To authenticate using the user's personal access token, select Use Access Token. Click Set Access Token, upload the token file, and click OK.

      Note that personal access tokens have a short life. Before uploading, make sure the access token hasn't expired. If you need to generate a personal access token, see Get a Visual Builder User's Personal Access Token.

  12. In Application URL Root and Application Version, enter the visual application's root URL and its version.
    You can find the application's root URL and its version from the Deployments tab of the environment where the visual application is deployed.

    Example:

  13. Click Save.
  14. To run a build, click Build Now.
Lock, Unlock, or Roll Back a Deployed Visual Application

You can lock and unlock deployed visual applications, and the web applications they contain, as well as roll back a deployed visual application. You would lock and unlock a visual application when you have maintenance tasks to complete and don’t want users to access the web applications in the deployed visual application during the maintenance period.

These visual application lifecycle operations (lock, unlock, roll back) can be managed manually from the Deployments tab of the Environments page, or with Visual Application build steps.

The Rollback menu option is available when you have deployed your visual application more than once without including the application version in the URL. That is, "live" appears in the application URL rather than the application version. If, for example, you've deployed three versions of your visual application to https://host/app-name/live/index.html, you can roll back to version 2 and then to version 1 by using the Rollback menu option.
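To illustrate, with versionless deployment every version serves from the same "live" address, so a rollback changes what that one URL serves. The host and app-name values below are placeholders from the example above, and the versioned URL form is a hypothetical sketch, not a documented guarantee.

```shell
# Placeholder values from the example above; substitute your own.
host="host"
app_name="app-name"

# Versionless deployment: every deployed version is served from this URL,
# so Rollback changes what it serves without changing the address.
live_url="https://${host}/${app_name}/live/index.html"
echo "${live_url}"

# Versioned deployment (hypothetical form): each version gets its own URL,
# so there is nothing for Rollback to switch.
versioned_url="https://${host}/${app_name}/1.0/index.html"
echo "${versioned_url}"
```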

If you deploy a visual application to a Visual Builder instance in the same identity domain as your VB Studio instance and you don't include the application version in the URL, you can perform this task from the Deployments tab of your Environments page.

Visual Applications Deployment Tab in Environments Page

Under the following conditions, you'll need to add and configure steps in a build job to lock, unlock, or roll back a visual application:

  • If you deployed your visual application to a different identity domain
  • If the URL includes the application version

After you create and configure the lifecycle management build steps, you may want to add them, in some combination, to the pipeline you created for the packaging and deployment steps for that testing or production instance. By integrating these build steps in your deployment process, you'll ensure a more robust and error-free process when upgrades are done through deployment.

Deploy Build Artifacts to Oracle Cloud Services

You can configure an Oracle Deploy build step to deploy your project's build artifacts, such as Java and Node.js applications, to Oracle Cloud Services, including Oracle Java Cloud Service (JCS) and Oracle Application Container Cloud Service (ACCS).

Before creating a build step for deployment, you must first create an environment and add the JCS and ACCS instances that will serve as your deployment targets. If you don't add them on the Environments page, you won't be able to select them in the build step. See Set Up an Environment for information about creating an environment and adding instances to it.

You can either add a build step that deploys the build artifact(s) to the job that creates and packages the artifact(s) or you can create a separate job for each task. If you use separate jobs, you can create a pipeline that begins with a job that builds and packages the application, followed by a job that deploys the build artifact(s) to the desired target environment. Using pipelines allows you the flexibility to add testing and other tasks to the flow.

Deploy an Application to JCS

You can create a job that copies build artifacts generated by another job and deploys those artifacts to a JCS target instance:

  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.
  3. In the New Job dialog box, in Name, enter a unique name.
  4. In Description, enter the job's description.
  5. In Template, select the Build VM template.
  6. Click Create.
  7. In the Job Configuration page, in the Before Build tab, select Copy Artifacts from the Add Before Build Action dropdown.
  8. Select the job that produces the artifact from the From job dropdown and the Last successful build from the Which build dropdown, then click Save.
  9. In the Steps tab, click the Add Step dropdown and select Oracle Deployment.
  10. In the Application Name field, enter the name that will be used by the target service to identify your application.
  11. In the Deployment Target dropdown, select the JCS instance where you want to deploy the application.
    If you don't see the instance that you want to deploy to, you'll need to go to the Environments page and define a new instance. After you do that, it will show up as a target in the dropdown.
  12. In the Deploy to Java dialog, specify the desired version and protocol, then enter the HTTPS port number, the username, and the password.
  13. Click Find Targets and select the server you want to deploy to from the list of available servers or clusters.
  14. Click OK.
  15. In the Artifact field, enter the path to the artifact that you want to deploy.
  16. Click Save.
  17. To run the job, click Build Now on the Builds page. The build first copies the artifact, then deploys it to the selected JCS target instance.

At this point, you may want to create a pipeline that chains a series of jobs: one that builds and packages the artifact, followed by one that retrieves it and deploys it to the desired JCS target instance.
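
If you'd rather keep this configuration under version control, the deploy job above can be sketched as a YAML file (the YAML format is described in Create and Configure Jobs and Pipelines Using YAML, later in this section). The job, environment, and instance names below are hypothetical:

```yaml
job:
  name: DeployToJCS                     # hypothetical job name
  vm-template: Basic Build VM Template
  before:
  - copy-artifacts:
      from-job: PackageMyApp            # hypothetical job that builds and packages the artifact
      which-build: "LAST_SUCCESSFUL"
  steps:
  - oracle-deployment:
      environment-name: MyEnvironment   # hypothetical environment from the Environments page
      service-name: MyJCSInstance       # hypothetical JCS instance in that environment
      username: weblogic                # the WebLogic username
      password: "#{JCS_PASSWORD}"       # named password
```
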
Access a Deployed Application

You can access an application deployed to an Oracle Cloud service from the console of that target service. Here are some ways to get the deployed application’s URL. You'll need to enter your identity domain name and your credentials, if you’re prompted to do so.

The Deployments tab on the Environments page shows deployed application extensions and deployed visual applications. Application Extensions show deployments for all projects associated with this environment. Visual Applications show deployments only for the current project.
To create the URL of an application deployed to JCS, follow these steps:

  1. Use the JCS View a Service Instance API to get the Content URL and examine the response body output to find the content_url.

    Example:

    curl -i -X GET -u jdoe@example.com:my_password -H "X-ID-TENANT-NAME:exampleidentitydomain" https://jaas.oraclecloud.com/jaas/api/v1.1/instances/exampleidentitydomain/exampleservice

    For more information about the REST API, see REST API for Oracle Java Cloud Service in Using Oracle Java Cloud Service.

    You have to use basic authentication to call the REST API. You can use cURL or a browser REST add-on, such as Postman for Google Chrome, to make this call.

  2. Get the context root of the application from the application.xml deployment descriptor for EAR deployments or from the web.xml deployment descriptor for WAR deployments.

    If there is no such descriptor, you’ll need to get the context root from the WebLogic Console.

    1. Open the WebLogic Console of the JCS instance. You can access the console from the Java Service link of the JCS deployment configuration.

    2. Click Deployments in the Domain Structure pane.

    3. Click the deployed application name in the Deployments table.

    4. In the Overview tab, copy the value displayed by Context Root.

    Note that the <host>:<port> referenced in the WebLogic Console is local to the JCS instance, so you’ll need the externally available IP address or the host name of the JCS instance VM to access the deployed application.

  3. Join the content URL and the context root of the application to construct the application URL.

    For example, if the content URL is http://129.130.131.132 and the context root is /deploy4214351085908057349, the application’s URL would be http://129.130.131.132/deploy4214351085908057349.

For more information, see Accessing an Application Deployed to an Oracle Java Cloud Service Instance in Using Oracle Java Cloud Service.

Deploy an Application to ACCS

You can create a job that copies build artifacts generated by another job and deploys those artifacts to an Oracle ACCS target instance:

  1. In the navigation menu, click Builds.
  2. In the Jobs tab, click + Create Job.
  3. In the New Job dialog box, in Name, enter a unique name.
  4. In Description, enter the job's description.
  5. In Template, select the Build VM template.
  6. Click Create.
  7. In the Job Configuration page, in the Before Build tab, select Copy Artifacts from the Add Before Build Action dropdown.
  8. Select the job that produces the artifact from the From job dropdown and the Last successful build from the Which build dropdown, then click Save.
  9. In the Steps tab, click the Add Step dropdown and select Oracle Deployment.
  10. In the Application Name field, enter the name that will be used by the target service to identify your application.
  11. In the Deployment Target dropdown, select the environment where you want to deploy the application.
    If you don't see the instance that you want to deploy to, you'll need to go to the Environments page and define a new instance. After you do that, it will show up as a target in the dropdown.
  12. Under ACCS Properties:
    1. Select the desired Runtime (Java, Java EE, Node, or PHP).
    2. Select the Subscription (Hourly or Monthly).
    3. To override the commands of the manifest file in the artifact ZIP file, select Include ACCS Manifest. For example, you can override the deployed application's version number at the time of deployment.
      In ACCS Manifest, enter the contents of the manifest file. The field opens in a code editor component so you can use the code editor features.
    4. To enter the contents of the deployment descriptor, select Include ACCS Deployment and enter the commands in ACCS Deployment.
      You can also enter commands to override the deployed application's container's configuration (such as RAM) at deployment time. You can use code editor features to edit the contents of this field.

      For more information about the ACCS metadata files, see Creating Metadata Files in Developing for Oracle Application Container Cloud Service.

  13. Click OK.
  14. In the Artifact field, enter the path to the artifact that you want to deploy.
  15. Click Save.
  16. To run the job, click Build Now on the Builds page. The build first copies the artifact, then deploys it to the selected Oracle ACCS target instance.

At this point, you may want to create a pipeline that chains a series of jobs: one that builds and packages the artifact, followed by one that retrieves it and deploys it to the desired Oracle ACCS target instance.

Automatically Deploy a Build Artifact

You can use a pipeline to automatically deploy a new version of a build artifact as soon as it becomes available. You can also configure a job to trigger a deployment build step and deploy an artifact as a post-build action.

Manage Oracle Cloud Service Deployments

By using the Oracle Java Cloud Service console, you can start and stop a deployment, redeploy an application, or undeploy a deployment.

  • Start or stop the application: Open and use the target service’s console to start or stop the deployed application on the target service.
  • Redeploy the application: If you’ve made changes to the source code or the build generated a new artifact, rerun the deploy build step to redeploy the application to the target service.
  • View logged deployment information: In the build log, locate and view the deployment section.
  • Undeploy a deployed application: Open and use the target service’s console to stop and then undeploy the deployed application on the target service.

Create and Configure Jobs and Pipelines Using YAML

YAML (YAML Ain't Markup Language) is a human-readable data serialization language that is commonly used for configuration files. To learn more about YAML, see https://yaml.org/.

In VB Studio, you can use a YAML file (a file with .yml extension) to store a job or pipeline configuration in any of the project's Git repositories. The build system constantly monitors the Git repositories and, when it detects a YAML file, creates or updates a job or a pipeline with the configuration specified in the YAML file.

Here's an example with a YAML file that configures a job:

job:
  name: MyFirstYAMLJob
  vm-template: Basic Build VM Template
  git:
  - url: "https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_902/scm/employee.git"
    branch: master
    repo-name: origin
  steps:
  - shell:
      script: "echo Build Number: $BUILD_NUMBER"
  - maven:
      goals: clean install
      pom-file: "employees-app/pom.xml"
  after:
  - artifacts:
      include: "employees-app/target/*"
  settings:
  - discard-old:
      days-to-keep-build: 5
      builds-to-keep: 10
      days-to-keep-artifacts: 5
      artifacts-to-keep: 10

What Are YAML Files Used for in VB Studio?

All YAML files must reside in the .ci-build directory in the root directory of any hosted Git repository's master branch. YAML files in other branches will be ignored. Any text file that has a .yml file extension and resides in the master branch's .ci-build directory is considered to be a YAML configuration file. Each YAML file can contain configuration data for exactly one job or one pipeline.

You can have YAML files in multiple Git repositories, or use a separate Git repository to host all your YAML configuration files. You cannot, however, use an external Git repository to host YAML files.

Because these configuration files are stored in Git, you can track changes made to the job or pipeline configuration and, if a job or pipeline is deleted, you can use the configuration file to recreate it.
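
For example, a hosted Git repository's master branch might carry its build configuration like this (the file names are hypothetical):

```
employee.git (master branch)
├── .ci-build/
│   ├── build_job.yml          # configures one job
│   └── release_pipeline.yml   # configures one pipeline
└── ... (application sources)
```
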

The build system constantly monitors the project's Git repositories. When it detects an update to a file with the .yml extension in the .ci-build directory of a Git repository's master branch, it scans the file to determine whether it defines a job or a pipeline, and creates or updates the corresponding job or pipeline. It first checks whether a job or pipeline with the name specified in the configuration file exists on the Builds page. If it exists, it's updated; if the name has changed in the configuration file, the job or pipeline is renamed; if it doesn't exist, it's created.

Note:

Jobs and pipelines created with YAML can't be edited on the Builds page. They must be edited using YAML. Similarly, jobs and pipelines created on the Builds page can't be edited using YAML.

YAML stores data as key-value pairs in the field: value format. A hyphen (-) before a field identifies it as an item of an array or list; it must be indented to the same level as its parent field. To indent, always use spaces, not tabs, and make sure that the number of indent spaces before a field name matches the number of indent spaces in the template; YAML is sensitive to the number of spaces used to indent fields. The field names in a YAML file are similar to the field names in the job configuration user interface:

  name: MyFirstYAMLJob
  vm-template: Basic Build VM Template
  git:
  - url: "https://mydevcsinstance-mydomain/.../scm/employee.git"
  steps:
  - shell:
      script: "echo Build Number: $BUILD_NUMBER"
  - maven:
      goals: clean install
      pom-file: "employees-app/pom.xml"

If you're editing a YAML file on your computer, always use a text editor with the UTF-8 encoding. Don't use a word processor.

Here are some additional points to consider about YAML files before you begin creating or editing them:

  • The name field in the configuration file defines the job's or pipeline's name. If no name is specified, the build system creates a job or a pipeline named <repo-name>_<name>, where <repo-name> is the name of the Git repository that hosts the YAML file and <name> is the YAML file's name without the .yml extension.

    For example, if the YAML file's name is MyYAMLJob and it's hosted in the YAMLJobs Git repository, then the job's or pipeline's name would be YAMLJobs_MyYAMLJob.

    If you add the name field later, the job or pipeline will be renamed. Its access URL will also change.
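
The default-naming rule can be sketched in shell, using the values from the example above:

```shell
# Derive the default job/pipeline name <repo-name>_<name> when the
# YAML file itself contains no name field.
repo_name="YAMLJobs"                       # Git repository hosting the file
yaml_file="MyYAMLJob.yml"                  # YAML file in .ci-build
job_name="${repo_name}_${yaml_file%.yml}"  # strip the .yml extension
echo "$job_name"                           # → YAMLJobs_MyYAMLJob
```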

  • Each job's configuration must define the vm-template field.
  • To define a string value, quotes are optional. However, if a string value contains special characters, always enclose the value in quotes.

    Here are some examples of special characters: *, :, {, }, [, ], ,, &, #, ?, |, -, <, >, =, !, %, @, `.

    You may use single quotes (' ') or double quotes (" "). To include a single quote in a single-quoted string, escape it by doubling it: for example, to set Don's job in the name field, use name: 'Don''s job' in your YAML file. To include a double quote in a double-quoted string, escape it with a backslash (\): for example, to set Don"s job in the name field, use name: "Don\"s job" in your YAML file. There's no need to escape backslashes in a single-quoted string.
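
Put together, each of the following lines is a valid way to quote a value containing special characters (the values are hypothetical, and a job would define only one name field):

```yaml
name: 'Don''s job'       # single quote escaped by doubling it
name: "Don\"s job"       # double quote escaped with a backslash
name: "deploy: #42"      # special characters such as : and # require quotes
```
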

  • All named passwords must be specified in quotes, unless you're using a password parameter.

    Here's an example that uses a named password:

      params:
      - string:
          name: myUserName
          value: "don.developer"
          description: My Username
      steps:
      - docker-login:
           username: $myUserName
           password: "#{PSSWD_Docker}"

    Here's an example that uses a password parameter:

      params:
      - string:
          name: myUserName
          value: "don.developer"
          description: My Username
      - password:
          name: myPwd
          password: #{PSSWD_Docker}
          description: Defining Build Password
      steps:
      - docker-login:
          username: $myUserName
          password: $myPwd
  • If you specify a field name but don't specify a value, YAML assumes the value to be null. This can cause errors. If you don't need to define a value for a field, remove the field name.

    For example, if you don't want to define Maven goals and want to use the default clean install, remove the goals field. The following YAML code can cause an error because goals isn't defined:

      steps:
      - shell:
          script: "echo Build Number: $BUILD_NUMBER"
      - maven:
          goals: 
          pom-file: "employees-app/pom.xml"
  • You don't need to define every one of the job's fields in the YAML file. Just define the ones you want to configure or change from the default values, and make sure that you're adding the parent field(s) when you define a child field:
      steps:
      - maven:
          pom-file: "employees-app/pom.xml"
  • To run a build of the job automatically when its Git repository is updated, use the auto field or set build-on-commit to true.

    For the current Git repository, using auto is equivalent to setting build-on-commit to true. So, don't use auto and build-on-commit: true together.

    Here's an example that uses auto:

      name: MyFirstYAMLJob
      vm-template: Basic Build VM Template
      auto:
        branch: patchset_1
    

    If you use auto, don't specify the Git repository URL. The job automatically tracks the Git repository where the YAML file is committed.

    Here's an example that uses build-on-commit:

      name: MyFirstYAMLJob
      vm-template: Basic Build VM Template
      git:
      - url: "https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_902/scm/employee.git"
        branch: patchset_1
        build-on-commit: true

    A commit when pushed to the patchset_1 branch triggers a build of the MyFirstYAMLJob job.

  • To add comments in the configuration file, precede the comment with the pound sign (#):
      steps:
      # Shell script
      - shell:
          script: "echo Build Number: $BUILD_NUMBER"
    
  • On the Builds page, to configure an existing job or a pipeline, click its Configure button or icon. If the job or the pipeline was created in YAML, VB Studio opens the YAML file in the code editor on the Git page so you can view or edit the configuration.

REST API for Accessing YAML Files

You can use an API testing tool, such as Postman, or curl commands to run REST API methods. To run curl commands, either download curl to your computer or use the Git CLI to run curl commands.

To create the REST API URL, you need your VB Studio user name and password, the base URL of your instance, the unique organization ID, and the project ID, which you can get from any of the project's Git repository URLs.

In a Git repository URL, the project's ID is located before /scm/<repo-name>.git. For example, if https://alex.admin%40example.com@mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_123/scm/NodeJSDocker.git is the Git repository's URL in a project, the project's unique ID will be mydevcsinstance-mydomain_my-project_123.
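
The extraction rule can be sketched in shell, using the repository URL from the example above:

```shell
# The project ID is the path segment immediately before /scm/<repo-name>.git.
repo_url="https://alex.admin%40example.com@mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/mydevcsinstance-mydomain/s/mydevcsinstance-mydomain_my-project_123/scm/NodeJSDocker.git"
path="${repo_url%/scm/*}"    # drop /scm/<repo-name>.git from the end
project_id="${path##*/}"     # keep only the last remaining path segment
echo "$project_id"           # → mydevcsinstance-mydomain_my-project_123
```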

How Do I Validate a Job or Pipeline Configuration?

To validate a job (or pipeline) configuration, use this URL with the syntax shown, passing in the local (on your computer) YAML file as a parameter:

https://<base-url>/<identity-domain>/rest/<identity-domain>_<unique-projectID>/cibuild/v1/yaml/validate

Here's an example with a curl command that validates a job configuration on a Windows computer:

curl -X POST -H "Content-Type: text/plain" --data-binary @d:/myApps/myPHPapp/.ci-build/my_yaml_job.yml -u alex.admin@example.com:My123Password https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/myorg/rest/myorg_my-project_1234/cibuild/v1/yaml/validate

Here's an example with a curl command that validates a pipeline configuration on a Windows computer:

curl -X POST -H "Content-Type: text/plain" --data-binary @d:/myApps/myPHPapp/.ci-build/my_yaml_pipeline.yml -u alex.admin@example.com:My123Password https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/myorg/rest/myorg_my-project_1234/cibuild/v1/yaml/validate

Create a Job or a Pipeline Without Committing the YAML File

You can create a job or pipeline without first committing its YAML file to your project's Git repository. To do so, use a URL with this syntax, passing in a local (on your computer) YAML file as a parameter:

https://<base-url>/<identity-domain>/rest/<identity-domain>_<unique-projectID>/cibuild/v1/yaml/import

VB Studio will read the YAML job (or pipeline) configuration and, if no errors are detected, create a new job (or pipeline). The job (or pipeline) must be explicitly named in the YAML configuration. After the job (or pipeline) has been created, you can edit its configuration on the Builds page. If errors are detected, the job (or pipeline) will not be created and the Recent Activities feed will display any error messages.

Here's an example that shows how to use a curl command with a YAML file on a Windows computer to create a job:

curl -X POST -H "Content-Type: text/plain" --data-binary @d:/myApps/myPHPapp/my_PHP_yaml_job.yml -u alex.admin@example.com https://mydevcsinstance-mydomain.developer.ocp.oraclecloud.com/myorg/rest/myorg_my-project_1234/cibuild/v1/yaml/import

You'll be prompted for the password:

Enter host password for user 'alex.admin':

How Do I Use YAML to Create or Configure a Job?

You can use YAML for creating a new job or configuring an existing one:

  1. Clone the Git repository with the YAML file to your computer or to the location where you want to host it.
  2. Create a file with the job's YAML configuration.
  3. Save the file with the .yml extension in the .ci-build directory at the root of the cloned Git repository: .ci-build/my_yaml_job.yml
  4. Validate the local YAML file. See How Do I Validate a Job or Pipeline Configuration?.
    Resolve any errors.
  5. Commit and push the file to the project's Git repository.
  6. Open the Project Home page and, in the Recent Activities feed, verify that the YAML file and the job were created.
    If there are any validation issues with the YAML file, a notification with a View Error link is displayed. Click the View Error link to review the error messages. Then update the YAML file and commit it again.
  7. Click the job's name to open it in the Builds page.
You can also create the job configuration file using the code editor on the Git page.

If you create the YAML file this way, you won't be able to validate it without committing it first. Commit the file and check the Recent Activities Feed on the Project Home page for any errors.

What Is the Format for a YAML Job Configuration?

In a YAML job configuration, any field with a value of "" accepts a string value that is empty by default. "" is not a valid value for some fields, such as name, vm-template, and url. If you want a field to use its default value, remove the field from the YAML file.

When you configure a job, fields such as name, description, vm-template, and auto must precede groups like git, params, and steps.

Here's a job's YAML configuration format with the default values:

job:
  name: ""
  description: ""
  vm-template: ""          # required
  auto: false              # true implies branch: master; otherwise, set branch explicitly
  auto:
    branch: mybranch
  from-job: ""             # create job as copy of another job; ignored after creation
  for-merge-request: false
  git:
  - url: ""                # required
    branch: "master"
    repo-name: "origin"
    local-git-dir: ""
    included-regions: ""
    excluded-regions: ""
    excluded-users: ""
    merge-branch: ""
    config-user-name: ""
    config-user-email: ""
    merge-from-repo: false
    merge-repo-url: ""
    prune-remote-branches: false
    skip-internal-tag: true
    clean-after-checkout: false
    update-submodules: false
    use-commit-author: false
    wipeout-workspace: false
    build-on-commit: false
  params:
  # boolean, choice, and string parameters can be specified as string values of the form - NAME=VALUE
  #   the VALUE of a boolean parameter must be true or false, e.g., - BUILD_ALL=true
  #   the VALUE of a choice parameter is a comma-separated list, e.g., - PRIORITY=NORMAL,HIGH,LOW
  #   the VALUE of a string parameter is anything else, e.g., - URL=https://github.com
  # Alternatively, parameters can be specified as objects:
  - boolean:
      name: ""                # required
      value: true             # required
      description: ""
  - choice:
      name: ""                # required
      description: ""
      choices: []             # array of string value choices; at least one required
  - merge-request:
      params:
      - GIT_REPO_BRANCH=""    # required
      - GIT_REPO_URL=""       # required
      - MERGE_REQ_ID=""
  - password:
      name: ""                # required
      password: ""            # required - recommended to use named password reference like "#{NAME}"
      description: ""
  - string:
      name: ""                # required
      value: ""               # required
      description: ""
  before:
  - copy-artifacts:
      from-job: ""
      build-number: 1                 # requires which-build: SPECIFIC_BUILD
      artifacts-to-copy: ""
      target-dir: ""
      which-build: "LAST_SUCCESSFUL"  # other choices: LAST_KEEP_FOR_EVER, UPSTREAM_BUILD, SPECIFIC_BUILD, PERMALINK, PARAMETER
      last-successful-fallback: false
      permalink: "LAST_SUCCESSFUL"    # other choices: LAST, LAST_SUCCESSFUL, LAST_FAILED, LAST_UNSTABLE, LAST_UNSUCCESSFUL
                                      # other choices require which-build: PERMALINK
      param-name: "BUILD_SELECTOR"    # requires which-build: PARAMETER
      flatten-dirs: false
      optional: false
  - oracle-maven:
      otn-login: ""                   # required
      otn-password: ""                # required
      server-id: ""
      settings.xml: ""
  - security-check:
      perform-analysis: false         # true to turn on security dependency analyzer of maven builds
      create-issues: false            # true to create issue for every affected pom file
      fail-build: false               # true to fail build if vulnerabilities detected
      severity: "low"                 # low (CVSS >= 0.0), medium (CVSS >= 4.0), high (CVSS >= 7.0)
      confidence: "low"               # low, medium, high, highest
      product: ""                     # required if create-issues true; "1" for Default
      component: ""                   # required if create-issues true; "1" for Default
  - ssh:
      config:
        private-key: ""               # optional if ssh-tunnel: password specified
        public-key: ""
        passphrase: ""
        server-public-key: ""         # leave empty to skip host verification
        setup-ssh: false              # true if setup files in ~/.ssh for cmd line tools
        ssh-tunnel: false
        username: ""                  # required if ssh-tunnel true
        password: ""                  # optional if ssh-tunnel true and private-key specified
        local-port: 0                 # required if ssh-tunnel true
        remote-host-name: "localhost" # optional if ssh-tunnel true
        remote-port: 0                # required if ssh-tunnel true
        ssh-host-name: ""             # required if ssh-tunnel true (name or IP)
  - sonarqube-setup:
      sonar-server: ""                # required Server Name as configured in Builds admin
      project-key: ""                 # optional key must be globally unique
      project-name: ""                # optional
  - xvfb:
      display-number: "0"
      screen-offset: "0"
      screen-dimensions: "1024x768x24"
      timeout-in-seconds: 0
      more-options: "-nolisten inet6 +extension RANDR -fp /usr/share/X11/fonts/misc"
      log-output: true
      shutdown-xvfb-after: true
  steps:
  - ant:
      build-file: ""
      targets: ""
      properties: ""
      java-options: ""
  - application-ext-packaging:
      build-artifact: "extension.vx" # optional, defaults to 'extension.vx'
      version: ""
  - bmccli:
      private-key: ""
      user-ocid: ""            # required
      fingerprint: ""          # required
      tenancy: ""              # required
      region: "us-phoenix-1"   # current valid regions are: us-phoenix-1, us-ashburn-1, eu-frankfurt-1, uk-london-1
                               # more may be added - check OCI configuration
  - docker-build:              # docker commands require vm-template with software bundle 'Docker'
      source: "DOCKERFILE"     # other choices: DOCKERTEXT, URL
      path: ""                 # docker file directory in workspace
      docker-file: ""          # Name of docker file; if empty use Dockerfile
      options: ""
      image:
        registry-host: ""
        registry-id: ""
        image-name: ""         # required
        version-tag: ""
      docker-text: ""          # required if source: DOCKERTEXT otherwise not allowed
      context-root-url: ""     # required if source: URL otherwise not allowed
  - docker-image:
      options: ""
      image:
        registry-host: ""
        registry-id: ""
        image-name: ""
        version-tag: ""
  - docker-load:
      input-file: ""           # required
  - docker-login:
      registry-host: ""
      username: ""             # required
      password: ""             # required
  - docker-push:
      options: ""
      image:
        registry-host: ""      # required
        registry-id: ""
        image-name: ""         # required
        version-tag: ""
  - docker-rmi:
      remove: "NEW"            # other options: ONE, ALL
      options: ""
      image:                   # only if remove: ONE
        registry-host: ""      # required
        registry-id: ""
        image-name: ""         # required
        version-tag: ""
  - docker-save:
      output-file: ""          # required
      image:
        registry-host: ""      # if omitted Docker Hub is assumed
        registry-id: ""
        image-name: ""         # required
        version-tag: ""
  - docker-tag:
      source-image:
        registry-host: ""      # required
        registry-id: ""
        image-name: ""         # required
        version-tag: ""
      target-image:
        registry-host: ""      # required
        registry-id: ""
        image-name: ""         # required
        version-tag: ""
  - docker-version:
      options: ""
  - fn-build:
      build-args: ""
      work-dir: ""
      use-docker-cache: true
      verbose-output: false
      registry-host: ""
      username: ""
  - fn-bump:
      work-dir: ""
      bump: "--patch"          # other choices: "--major", "--minor"
  - fn-deploy:
      deploy-to-app: ""        # required
      build-args: ""
      work-dir: ""
      deploy-all: false
      verbose-output: false
      use-docker-cache: true
      no-version-bump: true
      do-not-push: true
      registry-host: ""
      username: ""
      api-url: ""              # required
  - fn-oci:
      compartment-id: ""       # required
      provider: ""
      passphrase: ""           # required
  - fn-push:
      work-dir: ""
      verbose: false
      registry-host: ""
      username: ""
  - fn-test:
      work-dir: ""
      verbose-output: false
      test-all-functions: false
  - fn-version: {}
  - gradle:
      use-wrapper: false
      wrapper-gradle-version: ""            # ignored unless use-wrapper: true
      make-executable: false                # ignored unless use-wrapper: true
                                            # must set make-executable: true if wrapper doesn't already exist
      from-root-build-script-dir: false     # ignored unless use-wrapper: true
      root-build-script: ""                 # ignored unless from-root-build-script-dir: true; script directory
      tasks: "clean build"
      build-file: "build.gradle"
      switches: ""
      use-workspace-as-home: false
      description: ""
      use-sonar: false                      # if true sonarqube-setup must be configured
  - maven:
      goals: "clean install"
      pom-file: "pom.xml"
      private-repo: false
      private-temp-dir: false
      offline: false
      show-errors: false
      recursive: true
      profiles: ""
      properties: ""
      verbosity: NORMAL                # other choices: DEBUG, QUIET
      checksum:  NORMAL                # other choices: STRICT, LAX
      snapshot:  NORMAL                # other choices: FORCE, SUPPRESS
      projects: ""
      resume-from: ""
      fail-mode:  NORMAL               # other choices: AT_END, FAST, NEVER
      make-mode:  NONE                 # other choices: DEPENDENCIES, DEPENDENTS, BOTH
      threading: ""
      jvm-options: ""
      use-sonar: false                 # if true, sonarqube-setup must be configured
  - nodejs:
      source: SCRIPT                   # other choice: FILE
      file: ""                         # only if source: FILE
      script: ""                       # only if source: SCRIPT
  - oracle-deployment:                 # currently Visual Applications, Application Extensions, and JCS using REST are supported
      environment-name: ""             # required, scopes the service-name
      service-name: ""                 # required, the service instance type determines the deployment type
      username: "Morty"                # required if Visual Application or Application Extension deployment and access-token not specified
                                       # required if JCS deployment, and then it is the weblogic username
      password: "#{MORTY_PASSWORD}"    # required if Visual Application or Application Extension deployment and access-token not specified
                                       # required if JCS, and then it is the weblogic user's password
      access-token: ""                 # required if Visual Application or Application Extension deployment and username and password not specified
      application-version: "1.0"       # optional if Visual Application (defaults from visual-application.json), else n/a
      application-profile: "myprofile" # optional if Visual Application, else n/a
      include-application-version-in-url: true  # required if Visual Application, other choice: false
      data-management: "USE_CLEAN_DATABASE"     # required if Visual Application, other choice: "KEEP_EXISTING_ENVIRONMENT_DATA" 
      sources: "mysources.zip"         # optional if Visual Application (defaults to build/sources.zip), else unused
      build-artifact: "myapp.zip"      # optional if Visual Application (defaults to build/built-assets.zip), else unused
                                       # required if Application Extension
                                       # required if JCS
      application-name: "MyApp"        # required if Application Extension or JCS, else n/a
      weblogic-version: "12.2.x"       # required if JCS (one of 12.2.x or 12.1.x)
      https-port: "7002"               # required if JCS
      admin-port: "9001"               # required if JCS
      protocol: "REST"                 # required if JCS (one of REST, REST1221, SSH)
      targets: "mytarget1, mytarget2"  # required if JCS, one or more names of target service or cluster, comma-separated
  - psmcli:
      username: ""                     # required
      password: ""                     # required
      identity-domain: ""              # required
      region: US                       # other choice: EMEA
      output-format: JSON              # other choice: HTML
  - shell:
      script: ""
      xtrace: true
      verbose: false                   # both verbose and xtrace cannot be true
      use-sonar: false                 # if true sonarqube-setup must be configured
  - sqlcl:
      username: ""
      password: ""
      credentials-file: ""
      connect-string: ""
      source: SQLFILE                  # other choice: SQLTEXT
      sql-file: ""                     # only if source: SQLFILE
      sql-text: ""                     # only if source: SQLTEXT
      role: DEFAULT                    # other choices: SYSDBA, SYSBACKUP, SYSDG, SYSKM, SYSASM
      restriction-level: DEFAULT       # other choices: LEVEL_1, LEVEL_2, LEVEL_3, LEVEL_4
  - vbappops-export-data:
      environment-name:                # required
      service-instance:                # required
      vb-project-id:                   # required
      vb-project-version:              # required
      access-token:                    # required if username or password is null
      username:                        # required if access-token is null
      password:                        # required if access-token is null
      app-data-file:                   # required
  - vbappops-import-data:
      environment-name:                # required
      service-instance:                # required
      vb-project-id:                   # required
      vb-project-version:              # required
      access-token:                    # required if username or password is null
      username:                        # required if access-token is null
      password:                        # required if access-token is null
      app-data-file:                   # required
  - vbappops-lock-app:
      environment-name:                # required
      service-instance:                # required
      vb-project-id:                   # required
      vb-project-version:              # required
      access-token:                    # required if username or password is null
      username:                        # required if access-token is null
      password:                        # required if access-token is null
  - vbappops-unlock-app:
      environment-name:                # required
      service-instance:                # required
      vb-project-id:                   # required
      vb-project-version:              # required
      access-token:                    # required if username or password is null
      username:                        # required if access-token is null
      password:                        # required if access-token is null
  - vbappops-undeploy-app:
      environment-name:                # required
      service-instance:                # required
      vb-project-id:                   # required
      vb-project-version:              # required
      access-token:                    # required if username or password is null
      username:                        # required if access-token is null
      password:                        # required if access-token is null
  - vbappops-rollback-app:
      environment-name:                # required
      service-instance:                # required
      vb-project-id:                   # required
      vb-project-version:              # required
      access-token:                    # required if username or password is null
      username:                        # required if access-token is null
      password:                        # required if access-token is null
  - visual-app-packaging:
      sources: "build/sources.zip"     # optional, defaults to 'build/sources.zip'
      build-artifact: "build/built-assets.zip" # optional, defaults to 'build/built-assets.zip'
      optimize: true                     # boolean
  - wercker:
      token: ""                          # required
      application: ""                    # required
      pipeline: ""                       # required
      branch: ""                         # required
      message: ""
      run-id: ""
      script: ""
      status: ""
  after:
  - artifacts:
      include: ""                     # required
      exclude: ""
      maven-artifacts: false
      include-pom: false              # ignored unless maven-artifacts: true
  - git-push:
      push-on-success: false
      merge-results: false
      tag-to-push: ""
      create-new-tag: false
      tag-remote-name: "origin"
      branch-to-push: ""
      branch-remote-name: "origin"
  - javadoc:
      javadoc-dir: "target/site/apidocs"
      retain-for-each-build: false
  - junit:
      include-junit-xml: "**/surefire-reports/*.xml"
      exclude-junit-xml: ""
      keep-long-stdio: false
      organize-by-parent: false
      fail-build-on-test-fail: false
      archive-media: true
  - sonarqube:                            # sonarqube-setup must be configured
      replace-build-status: true          # Apply SonarQube quality gate status as build status
      archive-analysis-files: false
  settings:
  - discard-old:
      days-to-keep-build: 0
      builds-to-keep: 100
      days-to-keep-artifacts: 0
      artifacts-to-keep: 20
  - versions:
      version-map:
        java: "8"       # For templates the options (with defaults wrapped in '*' chars) are
                        #   java: 7, *8*, 11, 13, G1
                        # For the Built-in (Free) Executors, the options are
                        #   java: 7, *8*, 11, or 13
                        #   Ant: LATEST 
                        #   C++: LATEST 
                        #   Firefox: LATEST 
                        #   Git: LATEST 
                        #   Maven: LATEST 
                        #   Python2: LATEST 
                        #   Ruby: LATEST 
                        #   Xvfb: LATEST 
                        #   Gradle: LATEST 
                        #   jdev: 11 
                        #   nodejs: 0.12, 8, or *10* 
                        #   python3: 3.5, or *3.6* 
                        #   soa: 12.1.3, or *12.2.1.1* 
                        #   SQLcl: LATEST 
                        #   FindBugs: LATEST
  - git-poll:
      cron-pattern: "0/30 * * * * #Every 30 minutes"
  - periodic-build:
      cron-pattern: "0/30 * * * * #Every 30 minutes"
  - abort-after:
      hours: 0
      minutes: 0
      fail-build: false
  - build-retry:
      build-retry-count: 5
      git-retry-count: 5
  - log-size:
      max: 50                     # megabytes
  - logger-timestamp:
      timestamp: true
  - quiet-period:
      seconds: 0
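Both git-poll and periodic-build take the same cron-pattern value. Judging from the examples above, the pattern uses the standard five cron fields (minutes, hours, day of month, month, day of week), and the text after # inside the quoted string is a human-readable description. As a sketch (the pattern value below is an illustration, not taken from the reference), a job that polls Git every night at 2:30 AM might set:

```yaml
  settings:
  - git-poll:
      cron-pattern: "30 2 * * * #Every day at 2:30 AM"
```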
YAML Job Configuration Examples

Here are several examples of YAML job configurations:

Each example describes the job configuration, followed by the equivalent YAML code.

This configuration creates a job that runs Maven goals and archives the artifacts:

  • Job Name: MyFirstYAMLJob
  • Job's Build Template: Basic Build VM Template
  • Git repository: employee.git
  • Maven step:
    • Goals: clean install
    • POM file: employees-app/pom.xml
  • After build action:
    • Archived artifacts: employees-app/target/*
job:
  name: MyFirstYAMLJob
  vm-template: Basic Build VM Template
  git:
  - url: "https://mydevcsinstance-mydomain/.../scm/employee.git"
  steps:
  - maven:
      goals: clean install
      pom-file: "employees-app/pom.xml"
  after:
  - artifacts:
      include: "employees-app/target/*"

This configuration creates a job to run Docker steps that log in, build, and push an image to the OCI Registry:

  • Job Name: MyDockerJob
  • Job's Build Template: Docker and Node.js Template
  • Job Description: Job to build and push a Node.js image to the OCI Registry
  • Git Repository: NodeJSMicroDocker.git
  • Docker steps:
    • Docker registry host: iad.ocir.io
    • Username: myoci/ociuser
    • Password: My123Password
    • Image name: myoci/ociuser/mynodejsimage
    • Proxy options: --build-arg https_proxy=http://my-proxy-server:80
job:
  name: MyDockerJob
  description: Job to build and push a Node.js image to OCI Registry
  vm-template: Docker and Node.js Template
  git:
  - url: "https://mydevcsinstance-mydomain/.../scm/NodeJSMicroDocker.git"
  steps:
  - docker-login:
      registry-host: "https://iad.ocir.io"
      username: "myoci/ociuser"
      password: My123Password    
  - docker-build:
      source: "DOCKERFILE"
      options: "--build-arg https_proxy=http://my-proxy-server:80"
      image:
        image-name: "myoci/ociuser/mynodejsimage"
        version-tag: "1.8"
        registry-host: "https://iad.ocir.io"
      path: "mydockerbuild/"
  - docker-push:
      image:
        registry-host: "https://iad.ocir.io"
        image-name: "myoci/ociuser/mynodejsimage"
        version-tag: "1.8"
  - docker-image:
      options: "--all"

This configuration creates a job that uses SQLcl to run SQL commands and a script:

  • Job Name: RunSQLJob
  • Job's Build Template: Basic Build VM Template
  • SQL steps:
    • Username: dbuser
    • Password: My123Password
    • Connect string: myserver.oracle.com:1521:db1234
    • SQL commands:
      CD /home
      select * from Emp
    • SQL script file: sqlcl/simpleselect.sql
job:
  name: RunSQLJob
  vm-template: Basic Build VM Template
  steps:  
  - sqlcl:
      username: dbuser
      password: My123Password
      connect-string: "myserver.oracle.com:1521:db1234"
      sql-text: "CD /home\nselect * from Emp"
      source: "SQLTEXT"
  - sqlcl:
      username: dbuser
      password: My123Password
      connect-string: "myserver.oracle.com:1521:db1234"
      sql-file: "sqlcl/simpleselect.sql"
      source: "SQLFILE"

This configuration creates a job that copies artifacts from another job, runs Maven goals using the Oracle Maven Repository, archives the artifacts, and sets several triggers and advanced options:

  • Job Name: MyADFApp
  • Job's Build Template: JDev and ADF Template
  • Git Repository: ADFApp.git
  • Run a build on a push update to the patchset_1 branch: Yes
  • Git repository branch: patchset_1
    • Files to track for changes: myapp/src/main/web/.*\.java
    • Files to ignore for changes: myapp/src/main/web/.*\.gif
    • Remove untracked files before running a build: Yes
    • Display the commit author in the log: Yes
  • Copy artifacts from another job: ADFDependencies
    • Artifacts: adf-dependencies.war
  • Oracle Maven Repository connection:
    • OTN username: alex.admin@example.com
    • OTN password: My123Password
  • Maven step:
    • Goals: clean install package
    • POM file: WorkBetterFaces/pom.xml
  • After build steps:
    • Artifacts to archive: WorkBetterFaces/target/*.ear
  • Other settings:
    • Java version: 7
    • Discard old builds: Yes
      • Number of days to keep builds: 50
      • Number of builds to keep: 10
    • Periodic build trigger:
      • Hour: 2
      • Minutes: 30
    • Build retry count: 5
    • SCM retry count: 10
    • Abort if the build is stuck: 1 hour
job:
  name: MyADFApp
  vm-template: JDev and ADF Template
  auto:
    branch: "patchset_1"
  git:
  - url: "https://mydevcsinstance-mydomain/.../scm/ADFApp.git"
    branch: patchset_1
    build-on-commit: true
    included-regions: "myapp/src/main/web/.*\\.java"
    excluded-regions: "myapp/src/main/web/.*\\.gif"
    clean-after-checkout: true
  before:
  - copy-artifacts:
      from-job: ADFDependencies
      artifacts-to-copy: adf-dependencies.war
  - oracle-maven:
      otn-login: "alex.admin@example.com"
      otn-password: My123Password
  steps:
  - maven:
      goals: clean install package
      pom-file: "WorkBetterFaces/pom.xml"
  after:
  - artifacts:
      include: "WorkBetterFaces/target/*.ear"
  settings:
    general:
    - discard-old:
        days-to-keep-build: 50
        builds-to-keep: 10
    software:
    - versions:
        version-map:
          java: 7
    triggers:
    - periodic-build:
        cron-pattern: "30 2 * * *"
    advanced:
    - abort-after:
        hours: 1
    - build-retry:
        build-retry-count: 5
        git-retry-count: 10

How Do I Use YAML to Create or Configure a Pipeline?

You can use YAML for creating a new pipeline or configuring an existing one:

  1. Clone the project's Git repository to your computer, or to the location where you want to host the YAML file.
  2. Create a file with the pipeline's YAML configuration.
  3. Save the file with the .yml extension in the .ci-build directory at the root of the cloned Git repository: .ci-build/my_yaml_pipeline.yml
  4. Validate the local YAML file. See How Do I Validate a Job or Pipeline Configuration?.
    Resolve any reported issues.
  5. Commit and push the file to the project's Git repository.
  6. Open the Project Home page and, in the Recent Activities Feed, verify the notifications about the YAML file. You should see notifications that the YAML file and the pipeline were created.
    If there were any issues with the YAML file, a notification with a View Error link is displayed instead. Click the View Error link to download a JSON file with the error messages, review the errors, update the YAML file, and then commit and push it again.
  7. To view the pipeline, go to the Builds page and open the Pipelines tab:
    • To run a build of the pipeline's jobs, click Build.
    • To view the pipeline's instances, click its name. To edit its YAML configuration, click Configure.
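The clone, save, and push steps above (1, 3, and 5) can be sketched as a shell session. In this sketch a local bare repository stands in for the project's remote Git repository, and all paths and names are hypothetical:

```shell
set -e
rm -rf /tmp/vbs-demo

# Stand-in for the project's remote Git repository (hypothetical path).
git init --bare -q /tmp/vbs-demo/project.git

# Step 1: clone the repository to your computer.
git clone -q /tmp/vbs-demo/project.git /tmp/vbs-demo/project
cd /tmp/vbs-demo/project

# Step 3: save the .yml file in the .ci-build directory at the repository root.
mkdir -p .ci-build
cat > .ci-build/my_yaml_pipeline.yml <<'EOF'
pipeline:
  name: My Pipeline
  start:
    - Job 1
EOF

# Step 5: commit and push the file to the project's Git repository.
git add .ci-build/my_yaml_pipeline.yml
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Add pipeline YAML"
git push -q origin HEAD
```

After the push, VB Studio detects the new file in .ci-build and creates the pipeline from it.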
YAML Pipeline Configuration Examples

Here are several examples of YAML pipeline configurations:

Each example describes the pipeline configuration, followed by the equivalent YAML code.

This configuration shows a pipeline where Job 2 depends on Job 1 and runs after Job 1 completes successfully; Job 3, which depends on Job 2, runs after Job 2 completes successfully:

  • Pipeline name: My Pipeline
  • Description: YAML pipeline configuration
  • Start the pipeline if any job in the pipeline runs: Yes
  • Allow jobs in the pipeline to run independently while pipeline is running: Yes

Pipeline jobs:

  1. Job 1
  2. Job 2
  3. Job 3
pipeline:
  name: My Pipeline
  description: YAML pipeline configuration
  auto-start: true
  allow-external-builds: true
  start:
    - Job 1
    - Job 2
    - Job 3
    

This configuration shows a pipeline where Jobs 2, 3, and 4 depend on Job 1 and run in parallel after Job 1 completes successfully, and Job 5, which depends on Jobs 2, 3, and 4, runs after all three complete successfully:

  • Pipeline name: My Pipeline
  • Start pipeline if any job of the pipeline runs: Yes

Pipeline jobs:

  1. Job 1
    • Job 2
    • Job 3
    • Job 4
  2. Job 5
pipeline:
  name: My Pipeline
  auto-start: true
  start:
    - Job 1
    - parallel:
      - Job 2
      - Job 3
      - Job 4
    - Job 5

This configuration shows a pipeline where Job 2 runs after Job 1 completes successfully, and Jobs 3, 4, and 5 run in parallel after Job 2 completes successfully; Job 6 runs after Job 5 completes successfully, and Job 7 runs after Jobs 3, 4, and 6 all complete successfully:

Pipeline name: My Pipeline

Pipeline jobs:

  1. Job 1
  2. Job 2
    • Job 3
    • Job 4
    1. Job 5
    2. Job 6
  3. Job 7
pipeline:
  name: My Pipeline
  start:
    - Job 1
    - Job 2
    - parallel:
      - Job 3
      - Job 4
      - sequential:
        - Job 5
        - Job 6
    - Job 7

This configuration shows a pipeline where Job 2 runs after Job 1 completes successfully, and Job 3 runs if Job 1 fails:

Pipeline name: My Pipeline

Pipeline jobs:

  1. Job 1

    If Job 1 completes successfully:

    1. Job 2

    If Job 1 fails:

    1. Job 3
pipeline:
  name: My Pipeline
  start:
    - Job 1
    - on succeed:
        - Job 2
    - on fail:
        - Job 3

This configuration shows Job 2, which runs after Job 1 completes successfully, and Job 3, which runs if Job 1 fails, or if Job 1 completes successfully but its tests or another post-build action fails; Job 3 does not run if Job 1 completes successfully with no post-build failures:

Pipeline name: My Pipeline

  1. Job 1
    • (if Job 1 completes successfully) Job 2
    • (if Job 1 completes successfully, but tests or other post-processing fails) Job 3
  2. (if Job 1 fails) Job 3
pipeline:
  start:
    - Job 1
    - on succeed:
        - Job 2
        - on post-fail:
            - Job 3
    - on fail:
        - Job 3
Set Dependency Conditions in Pipelines Using YAML

When you create a pipeline that includes a dependency between a parent job and a child job, by default the child job's build runs after the parent job's build completes successfully. You can also configure the dependency so that the child job's build runs when the parent job's build fails, either in the pipeline designer or by setting an "on" condition in the YAML configuration.

The pipeline designer supports the Successful, Failed, and Test Failed conditions (see Configure the Dependency Condition). YAML supports additional condition names. Here they are, with the build results they map to:

  • "succeed" and "success" map to a "SUCCESSFUL" build result
  • "fail" and "failure" map to a "FAILED" build result
  • "test-fail" and "post-fail" map to a "POSTFAILED" build result
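For example, based on this mapping and the "on <condition>" syntax used in the pipeline examples, a dependency that runs a child job only when the parent's build result is POSTFAILED could be written as follows (the job names are placeholders):

```yaml
pipeline:
  name: My Pipeline
  start:
    - Job 1
    - on test-fail:
        - Job 3
```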

None of these conditions match when a build is aborted, canceled, or restarted. In those circumstances the dependency condition is never met, so the pipeline doesn't proceed past that job.

See YAML Pipeline Configuration Examples to learn more about using and setting some of these dependency conditions in YAML. The fourth example shows how to use the "on succeed" and "on fail" settings. The fifth example shows how to use the "on succeed", "on fail", and "on post-fail" settings.

After the fact, you can use the public API to retrieve the pipeline instance log and see what happened with each build in the pipeline. Use this format to get the log:

GET pipelines/{pipelineName}/instances/{instanceId}/log