2 Planning Your ASAP Installation

This chapter describes the hardware, operating system, software, server, and database requirements for installing Oracle Communications ASAP.

About Planning Your ASAP Installation

ASAP is composed of a series of applications, each with its own database schema, that are installed on an Oracle WebLogic Server domain. ASAP connects to an Oracle database to store all relevant information.

About Test Systems and Production Systems

Create test systems to support the following activities:

  • Familiarize yourself with ASAP functionality.

  • Investigate the ASAP Server implementation size for your production system.

  • Determine the ASAP server deployment options for your production system.

  • Determine your ASAP OCA client deployment and sizing options for your production system.

  • Determine the number and size of the Oracle Database tablespaces that your production system will require.

  • Determine the memory requirements for the individual ASAP server schemas.

  • Test WebLogic Server functionality and deployment options for test and production environments.

  • Develop new network activation (NA) cartridges or service activation (SA) network cartridges, or integrate and customize pre-existing ASAP cartridges.

  • Investigate and implement possible ASAP customization requirements for the ASAP SRP or NEP.

Create production systems after fully testing and integrating ASAP functionality according to your network requirements.

Types of Implementations

This section provides details on ASAP server and client implementations.

ASAP Server Implementation

This section provides details on ASAP server implementation size classifications, memory, and ASAP server disk space requirements.

ASAP Server Implementation Size

The ASAP implementation size classifications provided in this section are for approximate sizing purposes and align with the pre-tuned default configurations included with new ASAP installations. Your implementation requirements may vary.

The ASAP pre-tuned configurations are classified as small, medium, or large. These categories are defined based on the following factors:

  • Number and complexity of incoming requests (per day)

  • Number of network elements (NEs) that ASAP interfaces with

  • Average completion time of a request

The complexity of any custom code extensions can also affect ASAP performance and size requirements. For complete details on pre-tuned ASAP system configuration, see the appendix on tuning ASAP in the ASAP System Administrator's Guide.

As a general rule, the number of incoming requests per day determines the configuration classifications (see Table 2-1).

Table 2-1 ASAP Implementation Size Classifications

Implementation Size   Number of Orders per Second
-------------------   ------------------------------------------------------------
Small                 1 order per second, up to 50,000 per day
Medium                Up to 10 orders per second, up to 500,000 per day
Large                 10 to 20+ orders per second, 500,000 to 1M+ orders per day


Large configurations generally use multiple territories and instances of ASAP. Each territory can be a distributed configuration. A specific ASAP implementation can have many territories. The installations are independent because there is no communication between ASAP systems in different territories.

ASAP Server Memory

Table 2-2 lists example memory requirements for ASAP servers on UNIX running in an Oracle virtual machine. These example memory requirements also apply to other operating systems supported by ASAP (see "Hardware Requirements" for supported operating systems).

Table 2-2 ASAP Server Memory Requirements

Application Memory Requirements Description

SARM Server

50 MB (small implementation)

55 MB (medium implementation)

65 MB (large implementation)

The amount of memory required for the SARM depends on:

  • Number of concurrent work orders (WOs) being processed

  • Complexity of ASAP service modeling based on the number of Common Service Description Layer (CSDL) commands per work order and the number of Atomic Service Description Layer (ASDL) commands per CSDL

  • Number of target network elements

  • Number of service request processors (SRPs) for event notification

  • Internal resource for SARM configuration (for example, the number of threads)

Admin Server

25 MB

The Admin Server memory requirement.

Control Server

10 to 15 MB

The Control Server memory requirement.

Java SRP Server

256 MB

The Java SRP is managed through WebLogic Server and requires a minimum of 256 MB in production.

C/C++ SRP Server

6 to 30 MB

Because you can customize the SRP, the memory can also be used for:

  • Static cached configuration information

  • Dynamic data structures of work orders in progress

NEP Server – asc_nep

40 MB (small implementation)

45 MB (medium implementation)

40 MB on each server (large implementation)

The NEP memory usage depends on:

  • Number of NEs managed by the NEP

  • Number of communication devices used by the NEP

  • State Table cache within the NEP

NEP server – Java process

128 MB

Minimum production size


The Oracle database server also has memory requirements. For sizing guidelines, consult the Oracle Database documentation.

ASAP Disk Space

The disk space requirements for test and production systems differ. The space requirements described in the following sections are for the file systems required by the ASAP installer only and do not account for additional software (for example, the Oracle Database or WebLogic Server).

You must ensure that you have enough disk space, appropriately allocated, before beginning the installation process.

Test System Disk Space

Test systems have simple disk space requirements because they do not have to accommodate a production work order stream or a full array of network element mappings. Production systems have more complex disk space requirements than test systems.

Test systems are used to build and test support for network elements and build automated interfaces into ASAP for upstream systems. Ensure that you have enough disk space available on your machines for all of the requirements as listed in Table 2-3.

Table 2-3 ASAP Component Requirements

Components Disk Space

ASAP Installer TAR file size by platform

Solaris: 372 MB

Oracle Linux: 358 MB

AIX: 625 MB

ASAP core components size by platform

Solaris: 693 MB (see footnote 1)

Oracle Linux: 607 MB

AIX: 655 MB

ASAP logs

100 MB


Footnote 1 The ASAP Installer for Solaris also provides optional X2X (for the X.25 and X.29 protocols) and SNMP component support for External Device Drivers (EDD) functionality. This EDD functionality has been deprecated.

Production System Disk Space

Production systems are used against live network elements to process real work orders.

The sizing of production systems depends on variables such as daily transaction volume and amount of historical data to be maintained. If your production environment generates many log files due to high volumes of work orders, allocate additional space. Consult with your UNIX administrator for assistance.

Disk I/O speed can be a factor in performance, especially if many log entries are being written. For increased performance, Oracle recommends that you distribute your database and index segments over multiple disks, as described in "ASAP Server Hardware Requirements" for small, medium, and large systems.

For detailed information on tuning ASAP, see ASAP System Administrator's Guide.

Determining the Number of Network Elements per Network Element Processor

The number of Network Elements (NEs) required per Network Element Processor (NEP) depends on:

  • Whether the NE interface is asynchronous or synchronous.

  • Whether the NE can support independent concurrent connections.

  • The speed of the NE interface (for example, a slow MML response time or fast EMS interface).

  • The number of requests to that NE per day. This is usually expressed in terms of work orders per day although a more accurate measure is the number of ASDL commands per day.

  • The complexity of custom code extensions, State Tables, or Java code.

For more information about adding NEP servers, see the ASAP Server Configuration Guide. For more information about mapping NEs to NEP servers, see the ASAP Cartridge Development Guide.

Table 2-4 shows NEs classified according to their anticipated loads and provides a rough estimate of the number of NEs typically allocated to a single NEP.

Table 2-4 Suggested NE to NEP Ratios

NE Classification   Number of Orders per Day per NE   Average Number of NEs per NEP
-----------------   -------------------------------   -----------------------------
Idle                Fewer than 10                     30 to 100
Normal              Between 10 and 100                15 to 30
Busy                More than 100                     5 to 15


The number of NEs that can be managed by a particular NEP is limited only by available system resources. The NEP allocates threads to manage each NE and device used to interface to the NEs. For example, an NEP managing 20 NEs where each NE has two dedicated connections would require 20 + (2 x 20) = 60 threads within the NEP.
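The thread arithmetic described above can be sketched as follows (a simple planning estimate; the function name is illustrative, not an ASAP API):

```python
def nep_threads(num_nes: int, connections_per_ne: int) -> int:
    """Estimate the threads an NEP allocates: one per managed NE plus
    one per communication device (connection) used to reach the NEs."""
    return num_nes + num_nes * connections_per_ne

# The example from the text: 20 NEs, each with two dedicated connections.
print(nep_threads(20, 2))  # 20 + (2 x 20) = 60
```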

Table 2-5 shows the typical memory requirements for NEPs managing different numbers of NEs, excluding State Table caches.

Table 2-5 Memory Requirements per NEP

NEs per NEP   Approximate Memory Requirements
-----------   -------------------------------
0             7 MB
10            10 MB
20            15 MB
50            20 to 35 MB
500           50 to 70 MB


Note:

The example of 500 NEs per NEP is provided for reference purposes only. Your configuration may differ.

Table 2-6 shows an example for a client with 500 NEs.

Table 2-6 NE/NEP Configuration Example

NE Classification   Number of NEs   NEs per NEP   Number of NEPs Required   Memory (MB)     Cumulative Number of NEPs   Cumulative Memory (MB)
-----------------   -------------   -----------   -----------------------   -------------   -------------------------   ----------------------
Idle                200             50            4                         4 x 50 = 200    4                           200
Normal              200             20            10                        10 x 15 = 150   14                          350
Busy                100             10            10                        10 x 10 = 100   24                          450
Totals              500             -             24                        450             24                          450
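The rollup in Table 2-6 can be reproduced with a short script (a hedged sketch: the memory-per-NEP figures are taken from the example rows above, not from an ASAP API):

```python
# Each tuple: (classification, total NEs, NEs per NEP, memory per NEP in MB).
rows = [
    ("Idle",   200, 50, 50),
    ("Normal", 200, 20, 15),
    ("Busy",   100, 10, 10),
]

total_neps = 0
total_memory = 0
for name, nes, nes_per_nep, mb_per_nep in rows:
    neps = nes // nes_per_nep      # NEPs required for this classification
    memory = neps * mb_per_nep     # memory for those NEPs
    total_neps += neps
    total_memory += memory
    print(f"{name}: {neps} NEPs, {memory} MB")

print(f"Totals: {total_neps} NEPs, {total_memory} MB")  # 24 NEPs, 450 MB
```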


Order Control Application Implementation

This section provides details about the ASAP OCA client implementation size and NEP to NE distribution requirements.

Order Control Application Implementation Size

The OCA is a client application used for system administration and fallout management. You can deploy OCA to support a small, medium, or large number of concurrent users. The OCA deployment details depend on the following factors:

  • Number of OCA clients installed, the geographic locations to support, and the number of concurrent sessions to be run.

  • Number of orders to be queried and fixed per day. For example, if ASAP handles over 500,000 work orders per day and a small percentage of those work orders fail (for reasons such as incorrectly configured work orders or network connection problems), you may need a larger number of concurrent OCA sessions than in a smaller ASAP implementation.

System Deployment Planning

This section provides details about ASAP server process, Oracle database, and WebLogic Server deployment options.

ASAP Server Process Deployment Options

Install the ASAP environment on one or more UNIX machines. These machines run the ASAP server processes (with the exception of the JSRP, which runs in WebLogic Server). You can deploy the ASAP server processes to a single UNIX machine or distribute them over several machines, as described in the following sections.

Deploying ASAP to One UNIX Machine

When you install ASAP on one machine, ASAP must be able to access an Oracle database instance and WebLogic Server instance. These applications can be co-resident with the ASAP environment, or located on different machines.

A typical configuration consists of ASAP co-resident with the WebLogic Server instance connected to a remote Oracle database instance.

Deploying ASAP Over Several UNIX Machines

In a distributed environment, a Control Server must be present on each computer that runs an ASAP server application (such as the SRP, SARM, NEP, and Admin servers). One of these Control Servers is designated the primary (master) Control Server, while the rest are designated as secondary (remote) Control Servers. The primary Control Server manages all ASAP applications centrally. There can be many secondary Control Servers managing individual machines, each receiving application startup and shutdown requests from the primary server.

Figure 2-1 shows a sample ASAP system distribution with four networked machines.

In Figure 2-1:

  • Machine A – Hosts the primary Control Server and the SRP.

  • Machine B – Hosts a secondary Control Server and the SARM.

  • Machine C – Hosts another secondary Control Server and two NEPs.

  • Machine D – Hosts the database server.

ASAP components (for example, SRP, SARM, and NEP) can be distributed over several machines.

Note:

These machines do not need to be from the same vendor.

Figure 2-1 Sample ASAP Distribution


Oracle Database Deployment Options Supported by ASAP

Each ASAP server process has a database schema. The ASAP server schemas reside in either a single database instance or an Oracle Real Application Clusters (RAC) database instance, and are associated with one or more tablespaces.

You can deploy ASAP to a single database instance or to an Oracle RAC database, as described in the following sections.

Deploying ASAP to a Single Database

You can deploy ASAP to a single database instance. This database can be co-resident with ASAP or located on a remote machine.

Deploying ASAP to a Real Application Clusters Database

You can enhance Oracle Communications ASAP reliability by using the ASAP server configurations with an Oracle RAC database. With these configurations, the ASAP server does not shut down if it loses its connection with the database. ASAP behaves the same way whether the connection fails during the initial connection or during normal operation.

In ASAP installations with either a single database instance or an Oracle RAC system, you can configure ASAP servers to wait for a specified period of time before considering the database connection lost. You can also configure the number of attempts to re-establish the database connection and the interval between attempts. When the ASAP server reconnects, it uses transparent application failover (TAF) to reconnect to a preconfigured secondary instance, or to the same instance of a single-instance database. It creates a new connection identical to the connection established on the original instance, with the same connection properties.
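The retry behavior described above can be sketched generically (a hedged illustration of retry-with-interval logic; the function name, parameters, and defaults are hypothetical, not ASAP configuration keys):

```python
import time

def connect_with_retry(connect, attempts=3, interval_seconds=5):
    """Try to (re)establish a database connection a configured number of
    times, waiting a configured interval between attempts. `connect` is
    any callable that returns a connection object or raises on failure."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as err:
            last_error = err
            if attempt < attempts:
                time.sleep(interval_seconds)
    # All attempts exhausted: surface the failure to the caller.
    raise ConnectionError(f"gave up after {attempts} attempts") from last_error
```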

Oracle WebLogic Server Deployment Options Supported by ASAP

Several ASAP functions deploy to a WebLogic Server instance. You can deploy ASAP to a WebLogic administration server or to a managed server, as described in the following sections.

Deploying ASAP to a WebLogic Administration Server

For development environments, Oracle recommends that you deploy ASAP to a single WebLogic administration server.

Deploying ASAP to a WebLogic Managed Server

For production environments, Oracle recommends that you deploy ASAP to a WebLogic Server instance with an administration server and a managed server.

ASAP Reliability Deployment Planning

There are many configuration solutions that address availability, and the right option is often a balance between recovery time requirements and solution cost. System unavailability and downtime can occur for various reasons. Although most people associate downtime with hardware failure, other factors include network outages, software errors, and human error.

ASAP was designed for maximum system availability and application resiliency to various failure conditions. The ASAP control daemon (Control Server) process monitors other ASAP processes and restarts them if they fail.

To protect against network or disk failures, ASAP may be deployed using the following subsystem components:

  • Mirrored dual-ported data disks to protect the application from loss of critical configuration data

  • Backup or redundant network interfaces to assure network connectivity to application clients

  • Backup or redundant networks to assure network connectivity to application clients

  • Backup or redundant Power Distribution Units to guard against system power outages

ASAP is also certified for Oracle RAC to protect against database server failures. The ASAP Control Server monitors the connection to the Oracle Database and maintains database connectivity in the event of an Oracle RAC database node failover.

In addition to these considerations, the following sections provide availability recommendations.

Using ASAP Distributed Architecture to Ensure NEP Availability

To protect against a single point of failure, the ASAP network element processors (NEPs), which manage the interactions to network elements, can be deployed in a distributed manner. Distributing the NEPs allows you to manage your entire network and also provides a level of redundancy.

You can apply an ASAP cartridge to both the remote and local NEP server. If the active NEP server fails, update the NEP mapping to point to the backup remote NEP server. See "Installing ASAP Across Multiple UNIX Machines" for details on how to distribute the NEPs.

Configuring a Cold Standby ASAP Server

To offer additional reliability in the event of a hardware failure on the server running the ASAP SARM, ASAP may be deployed in a cold standby environment. A cold standby environment refers to a type of availability solution that allows ASAP to run on one server at a time (active/passive). The active server processes the orders while the standby server is installed with the same configuration as the active server.

Implementations of cold standby ASAP servers have been accomplished by systems integrators with solutions that are tailored to the customers' needs.

Note:

ASAP supports an active/passive cold standby deployment configuration. ASAP does not support active/passive warm or hot standby deployment configurations, or an active/active deployment configuration.

Configuring ASAP Clusters

For high-availability functionality, customers may run ASAP in a clustered environment using Oracle Solaris Cluster or third-party software. High availability using clustering is not native to ASAP. ASAP does not support deployment into a clustered WebLogic Server environment using WebLogic clustering support.

ASAP can be deployed for active-passive high availability using Oracle Solaris Cluster Data Service for Oracle Communications ASAP. See Oracle Solaris Cluster Data Service for Oracle Communications ASAP Guide, which is part of the Oracle Solaris Cluster documentation, for more information about this configuration.

Implementations of ASAP using third-party high-availability clustering software have been accomplished by systems integrators with solutions that are tailored to the customers' needs. ASAP has been deployed for active-passive high availability using products like Veritas.

Database and Client Planning

The ASAP installer uses tablespaces to create data and index segments for ASAP server schemas.

For test environments, the ASAP installer can create data and index segments for all server schemas within a single Oracle Database tablespace. In production environments, more complex configurations may be required to enhance performance. For example, you may need a separate tablespace for the SARM data and index segments, or separate tablespaces for each ASAP server's data and index segments. You must create the required tablespaces before installing the ASAP server software.

The ASAP server data stored in the Oracle Database tablespaces implements the following schemas:

  • Service Activation Request Manager Server Schema

    The SARM tablespace tables contain a large amount of dynamic information about past, present, and future ASAP requests.

  • Control Server Schema

    Two types of data are generated dynamically: performance data and event logs. The amount of performance data generated is configurable by modifying the time interval of sampling. If the time interval of generating data is two hours or more, and the performance data and event logs are purged every two days, 40 MB is sufficient.

    For information about modifying the time interval of sampling, see ASAP System Administrator's Guide.

  • Admin Server Schema

    This schema contains work order performance information, such as how long it takes to complete a CSDL and ASDL, how many CSDLs are processed, and the ASDL queue size for each NE.

  • Service Request Processor Server Schema

    The SRP schema is used for development testing. This schema can be used to store work order templates for use with the SRP emulator. Custom SRPs can also make use of this schema for implementing custom transactions.

  • Network Element Processor Server Schema

    This optional schema stores static data, the amount of which will not grow.

For more information on Oracle Database versions supported for this release, see "Software Requirements."

ASAP Oracle Database Tablespace Sizing Requirements

Use the information in this section to plan how you will create and configure the Oracle Database tablespaces required for the ASAP server software.

Recommended Tablespace and Redo Log Sizes for ASAP

Table 2-7 lists the recommended tablespace and redo log sizes for test environments and large production environments.

Table 2-7 Recommended Tablespace and Redo Log Sizes for ASAP

Environment Recommended Size

Test

5 GB for the entire database. Each individual ASAP test environment requires at least 75 MB. The actual size of these tablespaces and redo log files depends on the number of testing environments expected on each Oracle Database instance.

Production

Small: 36 GB

Medium: 73 GB

The actual disk space usage is dependent on the size of logs and completed order retention as well as order size.

Create three 1 to 5 GB redo log files on three separate drives.


Note:

Your Oracle DBA must create these tablespaces before installing ASAP. While it is possible to create individual tablespaces for each environment, you can combine the tablespace requirements for many test environments into fewer and larger tablespaces.
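The test-environment guideline in Table 2-7 can be turned into a quick capacity check (a hedged sketch using the 5 GB database and 75 MB-per-environment figures above; the function name is illustrative):

```python
# Figures from Table 2-7: a 5 GB test database, at least 75 MB per
# individual ASAP test environment.
MB_PER_TEST_ENV = 75
TEST_DB_MB = 5 * 1024

def max_test_environments(db_mb=TEST_DB_MB, env_mb=MB_PER_TEST_ENV):
    """Upper bound on the number of ASAP test environments a database
    of db_mb megabytes can hold if each needs at least env_mb."""
    return db_mb // env_mb

print(max_test_environments())  # 5120 // 75 = 68
```

In practice the bound is lower, because each environment's tablespaces grow with testing activity; treat this as a ceiling, not a target.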

Suggested Data to Index Segment Size Ratio

Although data and index segments grow at different rates, Oracle recommends that there should be a one to one size ratio between your data and index segments.

More Detailed Database Size Estimates

Table 2-8 should be used only as a guideline to determine your tablespace sizing.

Table 2-8 Sample ASAP Tablespace Sizing Requirements

Database Space (KB/WO) Description

SARM

8 to 20 KB per WO

The SARM tablespace tables contain a large amount of dynamic information about past, present, and future ASAP requests. The data size per work order can remain uniform from client to client; however, sizes have become increasingly varied, particularly with next-generation services.

The maintenance of extensive NE history information increases the average work order size in the tablespace. Also, the data gathered from the NEs (for example, on queries) can be substantial and change from client to client, depending on business rules, OSS infrastructure, and so on.

The example configuration provided is for a medium range ratio of KB per work order. For a medium-sized telco with 50,000 work orders per day, the SARM tablespace requirements can be calculated as follows:

  • Total number of work orders per day: 50,000

  • Total reserved space: 700,000 KB

  • Total space per work order: 14 KB

  • Data and log space per work order: 10 KB

  • Index space per work order: 4 KB

For more details about estimating tablespace sizing requirements, see "Sample Service Activation Request Manager Tablespace Requirements."

CTRL

Not applicable per work order

The CTRL (Control Server) tablespace tables maintain dynamic information related to:

  • Application process performance

  • System events and alarms

This dynamic information can be subject to different archiving and purging policies. Therefore, if the system configuration generates a large number of system events and alarms and has a long purge interval, more space can be required.

The CTRL tablespace tables do not contain any dynamic information related to the work order volume.

ADM

Not applicable per work order

The ADM (Admin) server tablespace tables maintain the following statistical information on ASAP processing:

  • Work order statistics

  • CSDL statistics

  • ASDL statistics

  • NE statistics

  • ASDL/NE statistics

The size of these tables depends on the following factors:

  • Poll period between the retrieval of statistical information by the Admin Server from the SARM

  • Number of CSDLs, ASDLs, and NEs in the system

  • Archive and purge interval of the data in these tables. These intervals can be quite large as the data can be used for reporting purposes.

SRP

2 to 10 KB per WO

The SRP schema is customizable per SRP; therefore, it is difficult to provide an accurate space estimate. The SRP schema is purely dependent on any custom tables defined at a client installation.

The following sample configuration is for a low- to medium-complexity SRP that does not maintain extensive information about work orders in its schema. For a medium-sized telco with 50,000 work orders per day, the SRP disk space can be calculated as follows:

  • Total number of work orders per day: 50,000

  • Total reserved space: 225,000 KB

  • Total space per work order: 4.5 KB

  • Data and log space per work order: 3.0 KB

  • Index space per work order: 1.5 KB

NEP

0 to 5 KB per WO

As the NEP tablespace tables maintain only static information, not dynamic, they are usually quite small. If dynamic information is generated and maintained in the NEP tablespace tables, then this space requirement can increase. The NEP data requirements are approximately 10 to 20 MB.
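The per-work-order ranges in Table 2-8 can be combined into a rough daily growth estimate (a hedged sketch: the ranges are guidelines from the table above, and the NEP contribution is often effectively static):

```python
# Per-work-order ranges (KB) from Table 2-8: SARM 8-20, SRP 2-10, NEP 0-5.
KB_PER_WO = {"SARM": (8, 20), "SRP": (2, 10), "NEP": (0, 5)}

def daily_growth_mb(orders_per_day):
    """Return (low, high) estimated tablespace growth in MB per day."""
    low = sum(lo for lo, hi in KB_PER_WO.values()) * orders_per_day / 1024
    high = sum(hi for lo, hi in KB_PER_WO.values()) * orders_per_day / 1024
    return low, high

low, high = daily_growth_mb(50_000)
print(f"{low:.0f} to {high:.0f} MB per day")  # about 488 to 1709 MB
```

Multiply by your completed-order retention period, as in the SARM worked example later in this chapter, to get a tablespace size rather than a daily rate.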


Sample Service Activation Request Manager Tablespace Requirements

This section contains sample SARM database sizes.

The average work order size is an important factor in determining the tablespace size requirements. Oracle recommends that you have a rough estimate of the sizing requirements on the ASAP application databases.

Note:

Depending on any additional tables that you have created, the SRP database size can be proportional to the SARM database size. This can also be an important consideration in determining a tablespace size.

Table 2-9 contains the results of a test suite of 80 work orders processed by ASAP to produce rough SARM sizing requirements.

Table 2-9 SARM Work Order Sizing Estimates

Name             Row Total   Reserved   Data Segment Size   Index Segment Size   Unused
--------------   ---------   --------   -----------------   ------------------   ------
tbl_asap_stats   0           80 KB      2 KB                8 KB                 70 KB
tbl_asdl_log     311         62 KB      42 KB               2 KB                 18 KB
tbl_info_parm    88          32 KB      10 KB               10 KB                12 KB
tbl_srq_csdl     191         32 KB      10 KB               6 KB                 16 KB
tbl_srq_log      3482        526 KB     496 KB              8 KB                 22 KB
tbl_srq_parm     4782        368 KB     142 KB              208 KB               18 KB
tbl_wrk_ord      80          48 KB      14 KB               4 KB                 30 KB


Note:

Estimates for tbl_info_parm assume that limited information is brought back from the NE and stored in the SARM. If queries are performed, the tablespace needs to be sized accordingly.

The work order details that produced the results for the dynamic SARM tables listed in Table 2-9 are as follows:

  • 80 work orders

  • 2.4 CSDLs per work order

  • 3.9 ASDLs per work order

  • 60 parameters per work order

  • 44 work order log entries per work order

For such orders, the size breakdown is:

  • Total data size: 726 KB

  • Total index size: Consider the index segment size to be approximately equal to the data segment size. Index size may vary depending on logging levels, operations, and so on.

  • Within the log data, the switch history log data is approximately 400 KB, with an index size of 6 KB.

This implies that the space required for each work order, including the switch history data, is 12 KB per work order.

Without any switch history, these values are reduced to approximately 8 KB.

Note:

These figures indicate that for simple residential orders, an average of 12 KB per work order is required in the SARM database. This estimate assumes fairly conservative switch history for each work order. Larger volumes of switch history have an impact on this estimate. In addition, the data requirements for the SRP database are not specified here because the SRP is specific to each client site.

Estimating Service Activation Request Manager Tablespace Size Requirements

Most of the space in the tablespace is required for data generated by the SARM during provisioning. The following estimations are used for tablespace sizing:

  • The average work order (or transaction) takes 15 KB.

  • Add or deduct 5 KB for bigger or smaller work orders.

As an example, consider a system that processes 10,000 immediate work orders/day, and has an average of 2,000 future-dated work orders. Completed work orders are purged every three days, and failed work orders are handled promptly.

The SARM schema sizing can be calculated as follows:

  • 10,000 x 15 KB = 150 MB

  • 2,000 x 15 KB = 30 MB

  • For 3 days of retention: 150 x 3 + 30 = 480 MB

  • Add about 15% on top for peak times and static data: 480 x 1.15 = 552 MB

  • Round to 550 MB

    Note:

    Some orders, such as those that query NEs, can generate a large amount of data that requires additional space per order to store in the SARM.
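The worked example above can be captured as a small sizing function (a hedged sketch: the function name and parameter defaults mirror the example, not an ASAP tool):

```python
import math

def sarm_schema_mb(immediate_per_day, future_dated, retention_days,
                   kb_per_order=15, headroom=0.15):
    """Estimate SARM schema size in MB from daily immediate orders,
    average future-dated orders, purge retention in days, an average
    order size in KB, and headroom for peak times and static data."""
    daily_mb = immediate_per_day * kb_per_order / 1000   # 150 MB in the example
    future_mb = future_dated * kb_per_order / 1000       # 30 MB
    retained_mb = daily_mb * retention_days + future_mb  # 480 MB
    return retained_mb * (1 + headroom)                  # 552 MB

print(math.ceil(sarm_schema_mb(10_000, 2_000, 3)))  # 552, rounded to ~550 MB in the text
```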

Java Service Request Processor Tablespace Requirement

Most of the JSRP data is incorporated into the SARM tablespace. See "Sample Service Activation Request Manager Tablespace Requirements."

Oracle Client

The Oracle Client is required for ASAP components to communicate with the Oracle Database. This client must be installed and configured before you install ASAP.

For more information about the Oracle Client versions supported for this release of ASAP, see "Software Requirements."

WebLogic Server Planning

This section provides details on WebLogic Server domain configurations supported by ASAP.

WebLogic Server Domain Options

ASAP supports the following domain configurations:

  • For test environments: one administration server

  • For production environments: one administration server and one managed server

An ASAP domain consists of one administration and one optional managed WebLogic Server instance and their associated resources possibly distributed over two machines. A managed server obtains its configuration from the administration server upon startup. Consequently, the administration server should be started before the managed server.

Figure 2-2 shows a WebLogic domain within ASAP.

Figure 2-2 WebLogic Domain Within ASAP


In a test environment, you can create a domain that consists of an administration server and deploy ASAP components to this server instance. However, in a production environment, Oracle recommends that the administration server reside on a dedicated physical machine and ASAP components be deployed to the managed server only (see Figure 2-3). The managed server is assigned to a physical machine. The administration server connects to a machine's node manager, which the administration server uses to monitor, start, stop, and restart a domain's managed server.

Figure 2-3 Administration and Managed Server


The node manager monitors the health of all servers on a machine and controls the restarting of failed servers.

Each domain contains a configuration file – domain_home/config/config.xml – that stores changes to managed objects so that they are available when WebLogic Server is restarted.

Do not manually modify the config.xml file. If you must make configuration changes, use the WebLogic Server Administration Console.

For more information about config.xml, see the latest Oracle Fusion Middleware WebLogic Server documentation.

Configuring Domain Networks

In a simple server setup, you assign a single network address and port number to each WebLogic Server instance.

You can configure the domain with multiple port numbers to improve performance and solve common networking problems. These port numbers allow you to:

  • Separate administration traffic from application traffic in a domain by creating an administration channel.

  • Improve network throughput by using multiple NICs with a single WebLogic Server instance.

  • Designate specific NICs or multiple port numbers on a single NIC for use with specific WebLogic Server instances.

  • Physically separate external, client-based traffic from internal, server-based traffic in a domain.

  • Prioritize network connections that servers use to connect to other servers in a domain.

If your domain contains a managed server that is running on a different machine or if your domain contains clients that use different protocols, you can use network channels. Using a single custom channel with multiple servers simplifies network configuration for a domain – changing a channel configuration automatically changes the connection attributes of all servers that use the channel.

You can use multiple channels to segment network traffic by protocol, listen ports, or any other channel configuration property. For example, you can use two channels with a single server to tailor the default connection properties for secure vs. non-secure traffic. You can also use multiple channels to separate external, client traffic from internal, server-to-server traffic.

Most WebLogic Server installations use one or more of the following common types of channels:

  • Default Channel – WebLogic Server automatically creates a default channel to describe the listen address and listen port settings associated with the ServerMBean. You can view the default channel configuration during server startup.

  • Administration Channel – You can define an optional administration port to separate administration traffic from application traffic in your domain. When you enable the administration port, WebLogic Server automatically generates an Administration Channel based on the port settings.

  • Custom Channels – A custom channel is a channel that you define and apply in a domain (rather than a channel that WebLogic Server automatically generates).

A network channel defines the basic attributes of a network connection to WebLogic Server, including the protocol, listen address, and listen port.

You configure network channels as distinct entities in the Administration Console, and then assign one or more channels to servers in a domain. The server instances to which you assign a channel use the port numbers and protocol configuration associated with the channel, instead of the default network configuration.

For information about using network channels, see the latest Oracle Fusion Middleware WebLogic Server documentation.

Node Manager

Node Manager is a standalone Java program provided with WebLogic Server that you can use to:

  • Start remote Managed Servers.

  • Restart Managed Servers that have shut down unexpectedly (for example, due to a system crash, hardware reboot, or server failure).

  • Automatically monitor the health of Managed Servers and restart server instances that have reached the "failed" health state.

  • Shut down or force the shutdown of a Managed Server that has failed to respond to a shutdown request.

For more information about using the node manager, see the latest Oracle Fusion Middleware WebLogic Server documentation.