Product Description

Software Module Descriptions

The following sections describe WebLogic Network Gatekeeper software modules:

 


SLEE

The SLEE is a CORBA-based service logic execution environment in which all SLEE services are executed. Because CORBA is also used for management, the SLEE and its services can be managed remotely.

The SLEE supports version handling of services without restarting the SLEE process. When a new service version is installed, the new version handles all new traffic involving the service, while the old version must finish the traffic it is currently handling before it can be removed. When the service is removed, all files belonging to that service are automatically deleted from the system.

The SLEE process is supervised by the SLEE agent. The agent restarts the SLEE process if it terminates unexpectedly. When the SLEE is restarted (by the SLEE agent or manually), the services' restart order and previous operating states are retrieved from the database.

 


SLEE services

All software modules installed and run in the SLEE are regarded as SLEE services. SLEE services can differ in their behaviour in relation to the SLEE and to other SLEE services.

The SLEE learns about a service's behaviour at installation, when it reads the service's deployment descriptor. The XML-based deployment descriptor also describes other SLEE service characteristics, such as the service name and default settings.

An installed SLEE service can have one of the following states:

Installed: The service software is installed in the SLEE.

Started: The service is started. If the service is manageable, it is also available in the WebLogic Network Gatekeeper management tool.

Activated: The service is activated, that is, in its normal running state, and accepts CORBA requests through its accessible interface.

Suspended: A sub-state between Activated and Started that is used for graceful shutdown of services. The service does not accept new requests, but it finishes all current assignments.

Error: The service has raised too many critical alarms and has been taken out of service by the SLEE. The allowed number of critical alarms is configured at service installation.

All service states are automatically restored in case of a SLEE restart.
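The lifecycle above can be pictured as a small state machine. The following is a minimal sketch, not the product API: the state names come from the table, but the transition set is inferred from the state descriptions.

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the SLEE service lifecycle. Transitions are
// inferred from the state descriptions, not taken from the product API.
enum ServiceState {
    INSTALLED, STARTED, ACTIVATED, SUSPENDED, ERROR;

    Set<ServiceState> nextStates() {
        switch (this) {
            case INSTALLED: return EnumSet.of(STARTED);
            case STARTED:   return EnumSet.of(ACTIVATED, INSTALLED);
            case ACTIVATED: return EnumSet.of(SUSPENDED, ERROR);
            case SUSPENDED: return EnumSet.of(STARTED);   // graceful shutdown completes
            default:        return EnumSet.noneOf(ServiceState.class);  // ERROR: out of service
        }
    }

    boolean canMoveTo(ServiceState target) {
        return nextStates().contains(target);
    }
}
```

Because all service states are restored after a SLEE restart, a real implementation would also persist the current state in the database, as the surrounding text describes.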

SLEE service load balancing

There are two variants of load balancing. In the first, an overloaded service instance passes the request on to another service instance that is not overloaded.

In the second, a severely overloaded service informs the requesting service that it is overloaded and rejects the request; the requesting service is then responsible for finding another instance of the service. The following sections explain load balancing for each SLEE service type.
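The two variants can be sketched as follows. ServiceInstance, StubInstance, and the method names are illustrative assumptions, not the product's interfaces.

```java
import java.util.List;

// Hypothetical view of a SLEE service instance; not the product API.
interface ServiceInstance {
    boolean isOverloaded();
    boolean isSeverelyOverloaded();
    String handle(String request);
}

class LoadBalancer {
    // Variant 1: the overloaded instance itself forwards the request
    // to a peer instance that is not overloaded.
    static String forwardIfOverloaded(ServiceInstance target,
                                      List<ServiceInstance> peers, String req) {
        if (target.isOverloaded()) {
            for (ServiceInstance peer : peers) {
                if (!peer.isOverloaded()) return peer.handle(req);
            }
        }
        return target.handle(req);
    }

    // Variant 2: a severely overloaded instance rejects the request;
    // the requester itself tries the remaining instances.
    static String rejectAndRetry(List<ServiceInstance> instances, String req) {
        for (ServiceInstance inst : instances) {
            if (!inst.isSeverelyOverloaded()) return inst.handle(req);
        }
        throw new IllegalStateException("all instances overloaded");
    }
}

// Trivial stub used to illustrate the two strategies.
class StubInstance implements ServiceInstance {
    final String name; final boolean over;
    StubInstance(String name, boolean over) { this.name = name; this.over = over; }
    public boolean isOverloaded() { return over; }
    public boolean isSeverelyOverloaded() { return over; }
    public String handle(String req) { return name; }
}
```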

 


SLEE utility services

Utility services are SLEE services that provide common support functions that can be used by all other SLEE services, see Table 16-1.

A utility service is called when another SLEE service wants to use the function the utility service provides.

Table 16-1 Utility Services

Alarm: Receives alarms and stores them in the database. Makes the alarms available to registered listeners through CORBA.

Charging: Receives CDRs and stores them in the database. Makes the CDRs available to registered listeners through CORBA.

Event log: Receives events and stores them in the database. Makes the events available to registered listeners through CORBA.

Global counters: Handles counters that are used by more than one SLEE instance.

Plug-in manager: Keeps track of the installed network plug-ins and related routes. Handles load balancing and high availability from the service capability modules towards the plug-ins.

Policy: Keeps track of all registered policies and makes policy decisions. For more information about the Policy service, see Policy.

Service capability manager: Keeps track of the installed service capability modules. Handles load balancing and high availability from the plug-ins towards the service capability modules.

Statistics: Receives usage information and stores it in the database.

Task manager: Handles thread pools and task queues.

Time: Provides timers and timestamps when requested.

Trace: Makes it possible to retrieve trace information from a service that is suspected to be faulty. Saves the trace information to a file.

 


Service capability modules

Service capability modules provide CORBA-based service capability interfaces. These interfaces are used by SESPA (see SESPA modules).

This section describes the functionality provided by a service capability module independently of the service capability it implements. For information about the service capabilities supported by WebLogic Network Gatekeeper, see Service Descriptions.

The main tasks performed by the service capability modules are charging data generation, plug-in selection, policy enforcement, load balancing, and high availability handling, each described below.

Figure 16-1 shows a service capability module connecting an application to two different networks through two network plug-ins. Also, the service capability module's interfaces to the charging, policy, plug-in manager and service capability manager utility services are shown.


 

Charging data generation

The service capability modules are responsible for collection and storage of charging data, see CDR based charging.

Plug-in selection

The applications' requests are routed to different network plug-ins depending on the destination address plan and address range. A plug-in manager included in each SLEE provides the routing functionality. All routing data is available in the database. Routes to individual plug-ins are specified using regular expressions that match the addresses to be routed to the connected networks.
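Regular-expression routing of this kind can be sketched as below. The class, the method names, and the route data are hypothetical; only the idea of matching destination addresses against configured patterns comes from the text.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Sketch of regex-based plug-in routing; not the product's plug-in manager.
class PluginRouter {
    // Maps an address pattern to a plug-in identifier, like a route table.
    private final Map<Pattern, String> routes = new LinkedHashMap<>();

    void addRoute(String addressRegex, String pluginId) {
        routes.put(Pattern.compile(addressRegex), pluginId);
    }

    // Returns the first plug-in whose route matches the destination address.
    String select(String destinationAddress) {
        for (Map.Entry<Pattern, String> e : routes.entrySet()) {
            if (e.getKey().matcher(destinationAddress).matches()) {
                return e.getValue();
            }
        }
        throw new IllegalArgumentException("no route for " + destinationAddress);
    }
}
```

For example, a route `tel:\+46.*` would direct all Swedish telephone numbers to one plug-in while a catch-all route handles the rest.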

Policy enforcement

The service capability modules enforce the policies defined in the service provider, application, and node SLAs. The policies are enforced in the Policy Enforcement Points (PEPs). For more information on policy, see Policy.

Load balancing

An overloaded service capability module can ask the SLEE whether there are other instances of the service capability module with normal load levels. If there are, the request to use the service capability module is forwarded to one of the other service capability module instances.

For network initiated events, the service capability module manager handles load balancing towards the service capability module instances according to a round-robin scheme.

High availability

From the plug-in perspective, the service capability module manager has a list of references to all service capability module instances of each type. When an application registers a listener for a network triggered event, a reference to that application is stored in the internal database. If a service capability module instance becomes unavailable, the plug-in gets a new service capability module instance reference from the service capability module manager and forwards the network triggered event to the new service capability module instance. The new service capability module instance uses the reference in the database to notify the application.

 


SESPA modules

SESPA modules provide stateless interfaces for the service capabilities. The states are stored persistently in the database and requests within one service session can be distributed across multiple SLEE instances for load balancing and high availability.

The SESPA modules are used for Web Services access. The switches handle high availability and load distribution for the initial HTTP access among the individual SESPA module instances.
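The stateless pattern described above can be illustrated as follows: each request reads the session state from a shared store before handling it, so any instance on any SLEE can serve any request in the session. The class, the state values, and the in-memory map standing in for the database are all hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a stateless facade whose session state lives in a shared store,
// so requests in one session can be served by different instances.
class StatelessSessionFacade {
    private final Map<String, String> sessionStore; // stands in for the database

    StatelessSessionFacade(Map<String, String> sessionStore) {
        this.sessionStore = sessionStore;
    }

    String handle(String sessionId, String request) {
        // Load the session state; a brand-new session starts as "NEW".
        String state = sessionStore.getOrDefault(sessionId, "NEW");
        String newState = state.equals("NEW") ? "ACTIVE" : state; // illustrative transition
        sessionStore.put(sessionId, newState);                    // persist before replying
        return newState + ":" + request;
    }
}
```

Because the facade itself holds no per-session fields, two instances sharing the same store behave identically for the same session, which is what makes cross-SLEE load balancing possible.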

Load balancing

The SESPA modules balance the load between available service capability modules of the same type.

High availability

The SESPA modules provide high availability across the available service capability modules of the same type.

 


Network plug-ins

A network plug-in is a SLEE service connecting a service capability module to a service node, a third party OSA/Parlay SCS, or other system in the underlying networks. For a list of available core network plug-ins, see Supported network protocols.

A plug-in consists of two parts: a service capability module specific part implementing the service capability module's plug-in interface, and a network-specific part interfacing the network. The plug-in converts the service capability module's CORBA-based plug-in interface to fit the interface exposed by the network. New protocol/network support can be added without affecting the service capability module. New plug-ins are added at run time. Even multiple versions of a network protocol can be handled using multiple plug-ins.
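The two-part structure is essentially an adapter, which can be sketched as below. Both interfaces and the class are illustrative assumptions; the product's actual plug-in interfaces are CORBA IDL, not plain Java.

```java
// Hypothetical sketch of a plug-in's two parts. The first interface stands
// for the service capability module's plug-in interface; the second for
// whatever protocol the underlying network exposes.
interface MessagingPluginInterface {
    void sendMessage(String destination, String payload);
}

interface NetworkProtocolClient {
    void submit(String networkAddress, byte[] body);
}

// The plug-in adapts the generic plug-in interface to the network protocol.
class SmsPlugin implements MessagingPluginInterface {
    private final NetworkProtocolClient network;

    SmsPlugin(NetworkProtocolClient network) {
        this.network = network;
    }

    public void sendMessage(String destination, String payload) {
        // Convert the generic call into the network-specific request.
        network.submit(destination, payload.getBytes());
    }
}
```

Swapping in a different NetworkProtocolClient implementation changes the supported protocol without touching the service capability module, which mirrors the point made above.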


 

When a network plug-in is installed in the WebLogic Network Gatekeeper, it registers itself in the plug-in manager instance of the SLEE in which it is installed. The SLEE then notifies the other SLEEs in the system that the new plug-in is available for use. The plug-in manager's routing function routes service requests to different plug-ins depending on the destination party's address format (supported address plan and address range). The routes are specified through the plug-in manager.

For a network triggered event, the service capability module manager provides the plug-in with a service capability module instance reference according to a round-robin scheme.

Load balancing

The plug-in manager distributes the requests from the service capability module according to a round-robin scheme. The more plug-ins of the same type that are installed, supporting the same address type and routing, the lower the load on the individual instances.
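Round-robin distribution of this kind can be sketched in a few lines; the class is illustrative, not the product's plug-in manager.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of round-robin selection over equivalent instances,
// as used here for distributing requests across plug-ins.
class RoundRobinSelector<T> {
    private final List<T> instances;
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinSelector(List<T> instances) {
        this.instances = List.copyOf(instances);
    }

    // Each call returns the next instance in cyclic order.
    T next() {
        int i = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

With N equivalent plug-ins, each instance receives roughly 1/N of the requests, which is why adding plug-ins of the same type lowers the load on each one.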

High availability

If a plug-in is unavailable, the service capability module gets a new plug-in instance reference from the plug-in manager, again according to a round-robin scheme.

 


Database

Storage and replication

The WebLogic Network Gatekeeper database is a replicated SQL database accessed through JDBC. The master and slave sides of the database run on different servers. If a SLEE service needs to use an external database, that access is also handled through JDBC.

A service that wants to access the database requests a database connection from the SLEE. The SLEE then tries to get a connection to the master database. If that fails, the SLEE tries to get a connection to the slave. If the SLEE gets a connection to the slave but not to the master, the SLEE promotes the slave to master and raises an alarm. The requesting service does not notice any of this as long as it gets its database connection; services are informed only if the SLEE fails to provide a database connection.
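The failover sequence can be sketched as follows. The connection sources are modeled as Callables so the logic is self-contained; the SLEE's real internals are not public, so this structure, the class, and the method names are assumptions.

```java
import java.sql.Connection;
import java.util.concurrent.Callable;

// Sketch of the master/slave connection logic described above.
// Connection sources are injected so the flow can be shown without a live DB.
class DbConnectionBroker {
    private Callable<Connection> master;
    private Callable<Connection> slave;
    private boolean promoted = false;

    DbConnectionBroker(Callable<Connection> master, Callable<Connection> slave) {
        this.master = master;
        this.slave = slave;
    }

    Connection getConnection() throws Exception {
        try {
            return master.call();              // try the master first
        } catch (Exception masterDown) {
            Connection c = slave.call();       // fall back to the slave;
            promoted = true;                   // promote it (and raise an alarm, per the text)
            Callable<Connection> old = master;
            master = slave;
            slave = old;
            return c;
        }
    }

    boolean slaveWasPromoted() { return promoted; }
}
```

The caller only sees an exception if both attempts fail, matching the point that services are informed only when no connection can be provided at all.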

For information about database security, see Database security.

 
