The SSH Node Manager is a shell script, wlscontrol.sh, located in WL_HOME/common/bin. The wlscontrol.sh file must exist on each machine that hosts server instances that you want to control with Node Manager. You can customize this script to meet site-specific requirements.
You must have an SSH client executable on each machine where Node Manager or a Node Manager client runs. The SSH client must also be in the path of the user ID running it. Typically, an SSH client is a standard part of a UNIX or Linux installation.
Before running Node Manager, you should create a dedicated UNIX user account for performing Node Manager functions. Add this user to all machines that will host the SSH Node Manager and to all machines that will host a Node Manager client, including the Administration Server.
On UNIX platforms, Oracle does not recommend running Node Manager as the root user. However, to achieve Post-Bind GID, you must start Node Manager as the root user. Post-Bind GID enables a server running on your machine to bind to a UNIX group ID (GID) after it finishes all privileged startup actions.
On each host machine, as the root user, create two new operating system (OS) users: bea and ndmgr, both associated with a new group called bea.
Use bea for installing WebLogic Server only.
Use ndmgr to create a WebLogic domain and start the Administration Server and remote Managed Servers using Node Manager.
Both OS users should have the same OS group (bea) to ensure that the correct permissions are in place for ndmgr to run WebLogic scripts and executables.
> groupadd bea
> useradd -g bea -m bea
> passwd bea
> useradd -g bea -m ndmgr
> passwd ndmgr
The Node Manager SSH shell script relies on SSH user-based security to provide a secure trust relationship between users on different machines. Within this trust relationship, password authentication is not required. You create a UNIX user account—typically one per domain—for running Node Manager commands and scripts. A user logged in as this user can issue Node Manager commands without providing a user name and password.
You must also ensure that the Node Manager and WebLogic Server commands are available in the path of the UNIX user ID used to run them. Change the environment file of the user to contain the path to the WL_HOME/common/bin directory. This file resides in the user's home directory.
Configure SSH trust between the ndmgr user on each machine that will run a WebLogic Server instance and the corresponding ndmgr user on every machine, including its own.
In other words, an ndmgr user on one machine must be able to establish an SSH session, without being prompted for security credentials, with the ndmgr user of the same name on the same machine or on a different machine. An ndmgr user must also be able to establish an SSH session with itself without being prompted for security credentials. This is necessary because any Managed Server can become the cluster master for migratable servers and, using SSH, issue commands to start other remote Managed Servers in the cluster. For Managed Server migration to work, the ndmgr user needs only to be able to run the wlscontrol.sh script using SSH. For more information, see Configuring Security for WebLogic Server Scripts.
For example, to configure one instance of a user to trust another instance of a user for SSH version 2:
From a terminal logged in as ndmgr user:
> ssh-keygen -t dsa
When prompted, accept the default locations and press Enter for passphrase so that no passphrase is specified.
Copy the ndmgr user's public key to the ndmgr user's home on the same machine and all other machines.
> scp .ssh/id_dsa.pub ndmgr@192.168.1.101:./
Establish an SSH session with the target machine as the ndmgr user and set up trust for the remote ndmgr user.
> ssh -l ndmgr 192.168.1.101 (you should be prompted for a password)
> mkdir .ssh
> chmod 700 .ssh
> touch .ssh/authorized_keys2
> chmod 700 .ssh/authorized_keys2
> cat id_dsa.pub >> .ssh/authorized_keys2
> rm id_dsa.pub
Test that you can establish an SSH session with the ndmgr user on the remote machine without requiring a password.
> ssh -l ndmgr 192.168.1.101
Repeat this process for all combinations of machines.
Alternatively, you can achieve the same result by generating a key value pair on each machine, concatenating all of the public keys into an
authorized_keys2 file, and copying (
scp) that file to all machines. Try establishing SSH sessions between all combinations of machines to ensure that the
~/.ssh/known_hosts files are correctly configured. For more information, see Generating and Distributing Key Value Pairs.
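The concatenation step of that alternative can be sketched locally as follows. The key files here are stand-ins for the id_dsa.pub files you would collect from each machine with scp; the hostnames are placeholders.

```shell
set -e
mkdir -p demo_keys
# stand-ins for public keys gathered from each machine via scp
echo "ssh-dss AAAAB3...hostA ndmgr@hostA" > demo_keys/hostA.pub
echo "ssh-dss AAAAB3...hostB ndmgr@hostB" > demo_keys/hostB.pub
# concatenate every collected key into a single authorized_keys2 file
cat demo_keys/*.pub > demo_keys/authorized_keys2
chmod 700 demo_keys/authorized_keys2
# this one file can now be copied (scp) into ~/.ssh/ on every machine
wc -l < demo_keys/authorized_keys2
```

Because every machine ends up with the same authorized_keys2 contents, any ndmgr user can then reach any other ndmgr user without a password prompt.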
As the bea user, install a WebLogic Server instance in the base directory,
/opt/bea/wlserver, on all the machines that will run WebLogic Server.
> java -jar wls_generic.jar
In the ndmgr user's home directory, create a WebLogic domain. You create the domain on the machine that will host the Administration Server only.
Subsequently, when you start the Administration Server, it will use the configuration in the
config subdirectory of this domain directory to determine the settings for the Administration Server and the domain.
It is likely that most Managed Server instances will be run remotely with respect to the Administration Server. Therefore, these Managed Servers will not have direct access to the domain configuration directory of the Administration Server. Instead they will run from a skeleton domain directory in their respective machine's ndmgr home directory and will obtain their configuration over the network on startup from the remotely running Administration Server.
As the ndmgr user, create the WebLogic domain.
Run the Configuration Wizard:
Create a new WebLogic domain based on the default WebLogic Server template.
For the Administration Server, specify a fixed IP address (for example, 192.168.1.100).
In Customize Environment and Service Settings, select Yes.
In Configure Managed Servers, add two Managed Servers, MS1 and MS2.
For the Managed Servers, specify floating IP addresses (for example, 192.168.1.101 and 192.168.1.102).
In Configure Clusters, add a cluster, CLUST, and then assign MS1 and MS2 to it.
Do not specify any Machines or UNIX Machines; you will do this manually in a subsequent step.
Name the domain clustdomain and save it to /home/ndmgr/clustdomain.
As the ndmgr user, start the Administration Server locally from a terminal using the
wlscontrol.sh Node Manager script.
> /opt/bea/wlserver/common/bin/wlscontrol.sh -d clustdomain -r /home/ndmgr/clustdomain -c -f startWebLogic.sh -s AdminServer START
For verbose logging to standard out, add the
-x parameter to the command.
Once successfully started, stop the Administration Server and then start it remotely using SSH.
> ssh -l ndmgr -o PasswordAuthentication=no -p 22 192.168.1.100 /opt/bea/wlserver/common/bin/wlscontrol.sh -d clustdomain -r /home/ndmgr/clustdomain -c -f startWebLogic.sh -s AdminServer START
Each machine that will host a Managed Server will have a skeleton domain created and configured.
From a local terminal, create a new empty directory (
clustdomain) in the home directory for the ndmgr user for each of the Managed Server host machines and also a back-up machine. For example:
> mkdir clustdomain
For each of the Managed Server host machines and the back-up machine, as the ndmgr user, use WLST to enroll the user's home directory as the base directory for remotely run servers and for Node Manager.
Be sure to run
nmEnroll on each remote machine. This command creates a property file,
/home/ndmgr/nodemanager.domains, which maps domain names to home directories, and creates the required domain configuration and security information so that Managed Servers can communicate with the Administration Server.
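As a sketch, an nmEnroll session for the example domain might look like the following WLST commands. The credentials and URL shown are the example values used in this chapter; verify the exact arguments against the WLST command reference.

```
connect('weblogic', 'password', 't3://192.168.1.100:7001')
nmEnroll('/home/ndmgr/clustdomain', '/home/ndmgr')
```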
The nodemanager.domains file removes the need to specify the domain home directory (with
-r) when starting
wlscontrol.sh. However, since you changed the Node Manager home directory, you must specify
-n /home/ndmgr. The default Node Manager home directory is
/opt/bea/wlserver/common/nodemanager; you might not want to use this directory as it is in the product installation directory and owned by another user.
By default, you can start Node Manager from any directory; a warning is issued if no nodemanager.domains file is found. You must create, or copy in, a nodemanager.domains file that specifies the domains that you want a Node Manager instance to control, or register the WebLogic domains using the WLST command, nmEnroll.
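For the example domain in this chapter, the nodemanager.domains file would contain a single name-to-directory mapping, along these lines:

```
clustdomain=/home/ndmgr/clustdomain
```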
Create a WebLogic script directory (
bin) in Node Manager's new domain home.
> mkdir ~/clustdomain/bin
Copy the scripts from the Administration Server's domain
bin directory to the corresponding domain
bin directory on each Node Manager machine (for example,
/home/ndmgr/bin). For example:
> scp ndmgr@192.168.1.100:~/clustdomain/bin/* ndmgr@192.168.1.101:~/clustdomain/bin
For each Node Manager machine (including the back-up machine), edit the shell scripts in the
bin directory to reflect the proper path for the local domain home, and the remote Administration Server's IP address.
Edit the DOMAIN_HOME variable in the setDomainEnv.sh script to correctly reflect this remote domain home directory.
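One hedged way to script that edit is shown below. The stand-in file mimics a copied setDomainEnv.sh; the real script sets many more variables, and the paths are the example values used in this chapter.

```shell
set -e
mkdir -p clustdomain/bin
# stand-in for the script copied from the Administration Server's domain
printf 'DOMAIN_HOME="/home/other/clustdomain"\n' > clustdomain/bin/setDomainEnv.sh
# point DOMAIN_HOME at the local skeleton domain home used in this chapter
sed 's|^DOMAIN_HOME=.*|DOMAIN_HOME="/home/ndmgr/clustdomain"|' \
    clustdomain/bin/setDomainEnv.sh > setDomainEnv.tmp
mv setDomainEnv.tmp clustdomain/bin/setDomainEnv.sh
cat clustdomain/bin/setDomainEnv.sh
```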
Similarly, edit the DOMAIN_HOME and ADMIN_URL (for example, t3://192.168.1.100:7001) variables in the startWebLogic.sh and startManagedWebLogic.sh scripts.
For each of the Managed Server host machines (including the back-up machine), as the ndmgr user, create a
server/security subdirectory in the domain directory.
For example, for the Managed Server MS1:
> mkdir -p ~/clustdomain/servers/MS1/security
For the back-up machine, create a server directory for every migratable Managed Server (for example, both MS1 and MS2).
Create a new boot.properties file, with the appropriate user name and password variables specified, in each Managed Server's security directory (for example, ~/clustdomain/servers/MS1/security/boot.properties).
When a Managed Server is first started using the script-based Node Manager, the values in this file will be encrypted.
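Before the first startup, the file holds clear-text values; for example (placeholder credentials, substitute your own):

```
username=weblogic
password=welcome1
```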
For each of the Managed Server machines, as the ndmgr user, start the Managed Server locally from a terminal using the
wlscontrol.sh Node Manager script.
For example, to start the Managed Server,
> /opt/bea/wlserver/common/bin/wlscontrol.sh -d clustdomain -n /home/ndmgr -c -f startManagedWebLogic.sh -s MS1 START
For verbose logging to standard out, add the
-x parameter to the command.
Once successfully started, stop the Managed Servers and then, as the ndmgr user, attempt to start the Managed Servers remotely using SSH.
For example, to start MS1:
> ssh -l ndmgr -o PasswordAuthentication=no -p 22 192.168.1.101 /opt/bea/wlserver/common/bin/wlscontrol.sh -d clustdomain -n /home/ndmgr -c -f startManagedWebLogic.sh -s MS1 START
Once successfully started, stop the Managed Servers again, and then repeat the process by trying to start each Managed Server (for example, MS1) on the back-up machine instead. Again, stop the server once it successfully starts.
Using the Administration Console, add a new UNIX Machine for each machine which will host an Administration or Managed Server (including the back-up machine) and include the following settings:
Node Manager Type (SSH)
Node Manager Listen Address
Node Manager Listen Port
Node Manager Home
Node Manager Shell Command
ssh -l ndmgr -o PasswordAuthentication=no -p %P %H /opt/bea/wlserver/common/bin/wlscontrol.sh -d %D -n /home/ndmgr -c -f startManagedWebLogic.sh -s %S %C
Node Manager Debug Enabled
Once all of the UNIX Machines are created, use the Administration Console to set the Machine property for each server, to ensure each server is associated with its corresponding UNIX Machine. See "Assign server instances to machines" in the Oracle WebLogic Server Administration Console Online Help.
In the Administration Console, start each Managed Server. See "Start Managed Servers from the Administration Console" in the Oracle WebLogic Server Administration Console Online Help.
Check the server logs in the logs subdirectory of each Managed Server directory to ensure that the server has started with no errors.
The default SSH port used by Node Manager is 22. You can override that setting in the following ways:
Specify the Port= parameter in the ~/.ssh/config file to set the default port for an individual user.
Specify the Port= parameter in the /etc/ssh_config file to set the default port across the entire system.
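For example, a per-user override in ~/.ssh/config might look like the following; the port number is an arbitrary example:

```
# ~/.ssh/config
Host *
    Port 2222
```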
Start the Administration Server using the following system property:
-Dweblogic.nodemanager.ShellCommand="ssh -o PasswordAuthentication=no -p %P %H wlscontrol.sh -d %D -r %R -s %S %C"
After starting the server, you can edit the SSH port in the Administration Server's configuration file.
To perform server migration and other tasks, the user ID executing scripts such as wlscontrol.sh must have sufficient security permissions, including the ability to bring an IP address online or take an IP address offline using a network interface.
Server migration is performed by the cluster master when it detects that a server has failed. It then uses SSH to launch a script on the target machine to begin the migration. The script on the target machine runs as the same user ID running the server on the cluster master.
The commands required to perform server migration, such as arping, require elevated OS privileges, which can present a potential security hole. Using sudo, you can allow the ndmgr user to run only arping with elevated privileges.
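A sudoers entry along these lines restricts the ndmgr user to the privileged commands that migration needs. The command paths are assumptions; adjust them for your platform, and edit the file with visudo:

```
# /etc/sudoers fragment (example paths)
ndmgr ALL=(root) NOPASSWD: /sbin/ifconfig, /sbin/arping
```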
The scripts are located in the
bin/ directory or the
server_migration directory. See Step 2: Configure Node Manager Security.
A remote start user name and password are required to start a server instance with Node Manager. These credentials are provided differently for Administration Servers and Managed Servers.
Credentials for Managed Servers—When you invoke Node Manager to start a Managed Server, it obtains its remote start name and password from the Administration Server.
Credentials for Administration Servers—When you invoke Node Manager to start an Administration Server, the remote start user name can be provided on the command line, or obtained from the Administration Server's
boot.properties file. The Configuration Wizard initializes the
boot.properties file and the
startup.properties file for an Administration Server when you create the domain.
Any server instance started by Node Manager encrypts and saves the credentials with which it started in a server-specific
boot.properties file, for use in automatic restarts.
The script-based Node Manager uses two types of key value pairs. This section contains instructions for distributing key value pairs to the machines that will host a Node Manager client or server.
This option distributes the same key value pair to all machines that will host a Node Manager client or server.
The simplest way to accomplish this is to set up your LAN to mount the Node Manager user home directory on each of the machines. This makes the key value pair available to the machines. Otherwise:
Generate an RSA key value pair for the user with the
ssh-keygen command provided with your SSH installation.
The default locations for the private and public keys are ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub, respectively.
If these keys are stored in a different location, modify the
ShellCommand template, adding an option to the
ssh command to specify the location of the keys.
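For example, the ssh client's -i option names an identity (private key) file. A hedged sketch of a modified ShellCommand template follows; the key path is an assumed example:

```
ssh -i /home/ndmgr/keys/id_rsa -o PasswordAuthentication=no -p %P %H wlscontrol.sh -d %D -r %R -s %S %C
```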
Append the public key to the
~/.ssh/authorized_keys file on the Node Manager machine.
command="/home/bea/server90/common/nodemanager/nodemanager.sh" 1024 33 23...2323
in which you substitute the public key that you generated, as stored in id_rsa.pub, for the string shown in the example as 1024 33 23...2323.
The command option ensures that a user who establishes a session with the machine using the public key can run only the specified command, nodemanager.sh. This ensures that the user can only perform Node Manager functions, and prevents unauthorized access to data, system utilities, or other resources on the machine.
Manually distribute the key value pair to each machine that will host a Node Manager server instance or client.
Execute the following command on the client machine to check that the Node Manager client can access Node Manager:
/home/bea$ ssh montgomery VERSION
This response indicates that the client accessed Node Manager successfully:
+OK NodeManager v9.1.0
On each machine that will host a Node Manager client:
Generate a separate RSA key value pair for the Node Manager user as described in step one in the previous section.
Append the public key to the machine's ~/.ssh/authorized_keys file as described in step two in the previous section.