This chapter provides information on the Command-Line Interface (CLI).
It contains the following sections:
The Oracle Adaptive Access Manager Command-Line Interface (CLI) scripts enable users to perform various tasks instead of using OAAM Admin.
You can use Oracle Adaptive Access Manager CLI scripts for the following:
Import or export objects like policies, groups, conditions, and other modules without using the graphical user interface.
Load location data into the Oracle Adaptive Access Manager database.
Setting up the CLI environment involves the following tasks:
Set up the CLI work folder
Set up the Credential Store Framework (CSF) configuration
Set up the Oracle Adaptive Access Manager database credentials
Copy the CLI folder $IDM_ORACLE_HOME/oaam/cli/oaam_cli to a working directory, for example, "oaam_cli".
Note:
This task is required because it is not recommended to edit or change any files inside the IDM_ORACLE_HOME folder (the folder where you installed the IDM software).
On UNIX, execute the following command:
cp -r <IDM_ORACLE_HOME>/oaam/cli ~/work/oaam_cli
On Windows, execute the following command:
xcopy /s <IDM_ORACLE_HOME>\oaam\cli c:\work\oaam_cli
Select "D=directory" when prompted so that the entire folder is copied.
Choose one of the following mechanisms to access the Oracle Adaptive Access Manager Encryption keys stored in the Credential Store Framework (CSF):
CSF without Mbeans
CSF with MBeans
Important notes about this approach are listed as follows:
This method requires that you run the Oracle Adaptive Access Manager command-line utility scripts on the same computer as the WebLogic Server.
This method does not require you to specify the WebLogic Administrator and password.
This method is not recommended if Oracle Adaptive Access Manager is deployed in a clustered environment.
To use this mechanism:
Go to the work folder where you copied the cli folder. Open the file conf/bharosa_properties/oaam_cli.properties in a text editor and set the following properties:
Property Name | Notes about Property Value |
---|---|
oaam.csf.useMBeans | false |
oaam.jps.config.filepath | Set the absolute path of jps-config-jse.xml. Usually, it resides in $DOMAIN_HOME/config/fmwconfig folder |
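With these settings, the relevant lines of oaam_cli.properties might look as follows (the domain path is a made-up example; point it at the jps-config-jse.xml of your own $DOMAIN_HOME):

```properties
oaam.csf.useMBeans=false
# Example path only; substitute your own domain's fmwconfig folder
oaam.jps.config.filepath=/u01/app/oracle/domains/oaam_domain/config/fmwconfig/jps-config-jse.xml
```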
Open the file conf/bharosa_properties/oaam_core.properties in a text editor and set the following properties related to the Oracle Adaptive Access Manager database:
Property Name | Notes about Property Values |
---|---|
oaam.db.url | Specify valid JDBC URL of the Oracle Adaptive Access Manager database. Make sure there are no typos. |
oaam.db.additional.properties.file | Leave this blank if there are no additional toplink properties. Otherwise, specify the name of the properties file that has the additional toplink properties. Make sure the file is in the same folder as oaam_core.properties. |
oaam.db.driver | oracle.jdbc.driver.OracleDriver (Change this value only if the Oracle Adaptive Access Manager schema is in a non-Oracle database) |
oaam.db.min.read-connections | 1 (Do not change this value unless required) |
oaam.db.max.read-connections | 25 (Do not change this value unless required) |
oaam.db.min.write-connections | 1 (Do not change this value unless required) |
oaam.db.max.write-connections | 25 (Do not change this value unless required) |
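For illustration, a completed set of these properties might look like this (the JDBC URL is a made-up example; substitute your own host, port, and service name):

```properties
# Example values only; replace the URL with your own database details
oaam.db.url=jdbc:oracle:thin:@dbhost.example.com:1521/oaamdb
oaam.db.additional.properties.file=
oaam.db.driver=oracle.jdbc.driver.OracleDriver
oaam.db.min.read-connections=1
oaam.db.max.read-connections=25
oaam.db.min.write-connections=1
oaam.db.max.write-connections=25
```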
Make sure the following jar files are included in the Java classpath environment variable.
Path | Required Jars |
---|---|
$WL_HOME/oracle_common/modules/oracle.jps_11.1.1 | |
$WL_HOME/oracle_common/webservices | wsclient_extended.jar |
$WL_HOME/oracle_common/modules/oracle.iau_11.1.1 | fmw_audit.jar |
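One way to assemble the classpath is sketched below. The WL_HOME value is an assumption for illustration, and the wildcard simply takes all jars in the oracle.jps_11.1.1 module folder; adjust both to your installation.

```shell
# Sketch: build CLASSPATH for CSF-without-MBeans mode.
# WL_HOME is an example path; substitute your own installation root.
WL_HOME=/u01/app/oracle/middleware
CP="$WL_HOME/oracle_common/modules/oracle.jps_11.1.1/*"
CP="$CP:$WL_HOME/oracle_common/webservices/wsclient_extended.jar"
CP="$CP:$WL_HOME/oracle_common/modules/oracle.iau_11.1.1/fmw_audit.jar"
export CLASSPATH="$CP"
echo "$CLASSPATH"
```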
Important notes about this approach:
This method is recommended if Oracle Adaptive Access Manager is deployed in a clustered environment.
This method permits you to remotely connect to the Oracle Adaptive Access Manager WebLogic Server.
This method requires you to specify the Oracle Adaptive Access Manager WebLogic Admin user and password.
To configure the Oracle Adaptive Access Manager database details with CSF with MBeans, follow these steps:
Go to the work folder where you copied the cli folder. Open the file conf/bharosa_properties/oaam_cli.properties in a text editor and set the following properties:
Property Name | Notes about Property Value |
---|---|
oaam.csf.useMBeans | true (Keep it as true) |
oaam.adminserver.hostname | <Host name where WebLogic Admin Server runs> |
oaam.adminserver.port | <Port number of WebLogic Admin Server. Usually it is 7001> |
oaam.adminserver.username | <Username of the WebLogic admin user. Usually it is weblogic> |
oaam.adminserver.password | <Password of the WebLogic admin user> |
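A filled-in example for this mode might look as follows (host name, port, and user are placeholders, not values from your environment):

```properties
oaam.csf.useMBeans=true
# Placeholder connection details; use your WebLogic Admin Server values
oaam.adminserver.hostname=adminhost.example.com
oaam.adminserver.port=7001
oaam.adminserver.username=weblogic
oaam.adminserver.password=<password>
```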
Open the file conf/bharosa_properties/oaam_core.properties in a text editor and set the following properties related to the Oracle Adaptive Access Manager database:
Property Name | Notes about Value |
---|---|
oaam.db.url | Specify valid JDBC URL of the Oracle Adaptive Access Manager database. Make sure there are no typos. |
oaam.db.additional.properties.file | Leave this blank if there are no additional toplink properties. Otherwise, specify the name of the properties file that has the additional toplink properties. Make sure the file is in the same folder as oaam_core.properties. |
oaam.db.driver | oracle.jdbc.driver.OracleDriver (Change this value only if the Oracle Adaptive Access Manager schema is in a non-Oracle database) |
oaam.db.min.read-connections | 1 (Do not change this value unless required) |
oaam.db.max.read-connections | 25 (Do not change this value unless required) |
oaam.db.min.write-connections | 1 (Do not change this value unless required) |
oaam.db.max.write-connections | 25 (Do not change this value unless required) |
Make sure the following jar files are included in the Java classpath environment variable.
Path | Required Jars |
---|---|
$WL_HOME/oracle_common/modules/oracle.jps_11.1.1 | |
$WL_HOME/wlserver_10.3/server/lib | |
$WL_HOME/oracle_common/modules/oracle.iau_11.1.1 | fmw_audit.jar |
Refer to Section 2.4.7, "Setting Up Oracle Adaptive Access Manager Database Credentials in the Credential Store Framework" for steps.
Note:
If you want to use persistence.xml instead of setting the Oracle Adaptive Access Manager database credentials in CSF, use the following steps. However, this approach is not recommended or supported.
Go to the work folder where you copied the cli folder. Open the file conf/bharosa_properties/oaam_cli.properties in a text editor and set the property oaam.db.toplink.useCredentialsFromCSF to false.
Update the Oracle Adaptive Access Manager database connection details in the META-INF/persistence.xml file by editing the relevant eclipselink.jdbc properties, as in the following examples:
<property name="eclipselink.jdbc.driver" value="oracle.jdbc.driver.OracleDriver"/>
<property name="eclipselink.jdbc.url" value="jdbc:oracle:thin:@<dbhost.mydomain.com>:1521/<SERVICE_NAME>"/>
<property name="eclipselink.jdbc.user" value="<OAAM DB USER>"/>
<property name="eclipselink.jdbc.password" value="<DB Password>"/>
The Oracle Adaptive Access Manager CLI is a tool with which you can perform various tasks from the command line rather than through OAAM Admin.
You can use Oracle Adaptive Access Manager CLI in the following ways:
import or export objects like policies, groups, conditions, and other modules without using the graphical user interface.
perform import and export between different environments (for example, QA and staging) using a program.
load location data.
Set up the Oracle Adaptive Access Manager CLI environment before you run any of the scripts. For details refer to Section 23.2, "Setting Up the CLI Environment."
To obtain usage information on Oracle Adaptive Access Manager CLI for import or export:
At the command line, change to the Oracle Adaptive Access Manager CLI work folder.
Run the runImportExport.sh script without any arguments.
$ sh runImportExport.sh
This subsection provides details about the command-line options.
To perform an import or export, you enter commands coupled with:
information for actions like import or export
information for modules like policies, groups, validations, or others
arguments for whether to export or import different modules
additional parameters for the import and export features.
Use this syntax for the command-line interface (typed in a single line with no line breaks or carriage returns):
sh runImportExport.sh
|-- action < import | export >
| +-- <export>
| + |-- entitycmd < add | delete >
| + |-- exportmode < zip | file >
| + |-- includeelements < true | false >
| + |-- listelemcmd < add | delete | replace >
| + -- outdir < path_to_dest_dir >
| +-- <import>
| -- batchmode < true | false >
-- module < rules | groups | policy(models) | questions | validations | answerHint | properties | conditions | questionsForTranslation | patterns | entities | transactions | dynamicActions | taskGroups >
+-- <groups>
-- submodule < all | users | alerts | ... >
+-- <properties>
-- name < propertyId >
-- loadType < database | properties | system >
+-- <conditions>
-- forceUpdate < true|false >
-- adminUser < username >
-- adminPassword < password >
The options are described in Section 23.3, "Using CLI."
Parameters | Description |
---|---|
entitycmd | Indicates whether the entities for the module being exported will be added to the database or deleted from the database when the file is imported. Default is add. |
exportmode | Indicates whether the result of the export will be a ZIP file or an XML file. Default is ZIP. |
includeelements | Indicates whether the group elements need to be included in the export. Default is true. This is applicable only for the export of groups. |
listelemcmd | Indicates whether the group elements will be added, deleted, or replaced in the database when this file is imported. Default is add. This is applicable only for groups export. |
outdir | The output folder where the resulting files from the export will be saved. Default value is the current folder. |
batchmode | Controls the database commits when list items are imported in a batch. When the batch reaches its limit, the objects are inserted into the database. If batchmode is equal to true, the database update is also committed. By default, batchmode is set to false. |
submodule | Used to specify the type of groups that should be included in the export. Default value is all. This is applicable for groups export. |
loadType | Used to specify the type of properties that need to be exported. If not specified, all types of properties are included. This is applicable for properties export. |
The list of supported modules for Oracle Adaptive Access Manager 11g is shown in Table 23-2.
Module | Entity Name |
---|---|
groups | groups |
policies | models |
questions | questions |
validations | validations |
answer hint | answerHint |
properties | properties |
conditions | conditions |
questions for translation | questionsForTranslation |
patterns | patterns |
entities | entities |
transactions | transactions |
configurable actions | dynamicActions |
scheduler task groups | taskGroups |
The 10g policy set and policy modules are no longer valid in 11g.
A difference between CLI import/export in 10g and 11g is that the modules models and policies mean the same thing: -module policy is the same as -module models.
Examples of import options are as follows:
To import from a file, issue the following command:
$ sh runImportExport.sh -action import -module properties exportData\properties\<properties_zip_file>
To import the contents of a ZIP file, issue the following command:
$ sh runImportExport.sh -action import -module <supported_module> <filename>
Here are examples:
To upload challenge questions, issue the following command:
$ sh runImportExport.sh -action import -module questions <filename>
To import conditions, issue the following command:
$ sh runImportExport.sh -action import -module conditions <filename>
To import policies, run the following command
$ sh runImportExport.sh -action import -module models <filename>
To import groups, run the following command
$ sh runImportExport.sh -action import -module groups <filename>
Import a Group of Users in an XML File
To import a group of users in an XML file, issue the following command:
$ sh runImportExport.sh -action import -module groups <abc.xml>
Import Multiple Policies from Multiple ZIP Files
To import multiple policies from multiple ZIP files, issue the following command:
$ sh runImportExport.sh -action import
-module models <ManyModels.zip> <OneModel.zip>
Import Multiple Questions from Multiple ZIP Files
To import multiple questions from multiple ZIP files, issue the command:
$ sh runImportExport.sh -action import
-module questions <ManyQuestions.zip> <OneQuestions.zip>
Import Multiple Validations from Multiple ZIP Files
To import multiple validations from multiple ZIP files, issue the command:
$ sh runImportExport.sh -action import
-module validations <ManyValidations.zip> <OneValidations.zip>
Note:
You may note that inapplicable options are silently ignored (for example, the outdir option used for import) and options with lower precedence are overridden (for example, listelemcmd is irrelevant when includeelements is equal to false).
Here are examples of export options:
To export all the properties irrespective of loadtype, issue the following command:
$ sh runImportExport.sh -action export -module properties
To export all the properties of any particular loadtype, issue the following command:
$ sh runImportExport.sh -action export -module properties -loadtype < database | properties | system >
For example, to export all the properties of database loadtype, issue the following command:
$ sh runImportExport.sh -action export -module properties -loadtype database
To export any single property, issue the following command:
$ sh runImportExport.sh -action export -module properties -name <propertyname>
When performing an export, if no entity names are specified, all the entities of that particular module (and submodule) are exported. Thus, specifying names is not necessary for export.
To export all entities of a particular module, issue the following command:
$ sh runImportExport.sh -action export -module <module entity_name>
To export all policies, issue the following command:
$ sh runImportExport.sh -action export -module models
To export groups, issue the following command:
$ sh runImportExport.sh -action export -module groups -submodule users
To export questions, issue the following command:
$ sh runImportExport.sh -action export -module questions
CLI exports all the related categories, validations, and locale information to make these questions complete.
To export all validations, issue the following command:
$ sh runImportExport.sh -action export -module validations
To export conditions, issue the following command:
$ sh runImportExport.sh -action export -module conditions
Export Condition with Delete Script
To export conditions with a delete script, issue the following command:
$ sh runImportExport.sh -action export -module conditions -entitycmd delete
Export Specific Groups, Grp1 and Grp2, without Elements for Delete
To export specific groups without elements, issue the following command:
$ sh runImportExport.sh -action export
-module groups -includeelements false -entitycmd delete Grp1 Grp2
entitycmd indicates whether the entities for the module being exported will be added to the database or deleted from the database when the resulting file is imported.
In this example, Groups Grp1 and Grp2 are deleted from the database when the resulting file from this export command is imported back.
Export Groups with List Command Replace
To export groups with list command replace, issue the following command:
$ sh runImportExport.sh -action export -module groups -listelemcmd replace G1 G2
The group elements for groups G1 and G2 will be replaced by the elements in the ZIP file during the import of the file resulting from this export command. For example, if group G1 has elements e1 and e2 in the database, and the ZIP file has elements e2 and e3, after the execution of the import, group G1 will have elements e2 and e3. However, if the value of listelemcmd had been "add," then after the import, G1 would have elements e1, e2 and e3. If the value specified was "delete," then after import, group G1 would have element e1 only as e2 would have been deleted.
Export Groups to DESTDIR, But Do Not Create a ZIP File
To export groups to DESTDIR, but not create a ZIP file, issue the following command:
$ sh runImportExport.sh -action export -outdir DESTDIR -exportmode file
-module groups Group1 Group2
If exportmode is "file," then the data is exported as one or more XML files.
Note:
The command does not work for modules like policies and questions, which have dependent data. An error will occur with the message that a ZIP stream is expected.
The batchmode option controls the database commits when list items are imported in a batch. When the batch reaches its limit, the objects are inserted into the database. If batchmode is equal to true, the database update is also committed. By default, batchmode is set to false.
batchmode {true | false}
Note:
batchmode is not to be used in conjunction with importing other modules. It should be used with lists only.
Here is an example of batchmode usage:
To import groups in batch mode, issue the following command:
$ sh runImportExport.sh -action import -module groups -batchmode true
The preceding examples cover only those scenarios where the entities to be processed are of the same type. To be able to process different types of modules together, the command line has been altered to support multiple modules. All entities specified in a command are processed in a single transaction, which allows a related set of entities to be used together to ensure the "all or nothing" approach.
Here are examples of importing modules together:
Import Various Modules Together
To import various modules together, issue the following command:
$ sh runImportExport.sh -action import
-module groups 5grps.zip
-module models model1.zip
Note:
The action parameter is not repeated; only the -module parameter and its arguments are repeated for the different items to be imported. The order of the items supplied on the command line is retained, both for the type of entities and for the files for each entity.
Support for multiple modules raises several questions:
What about the extra options?
How to specify options common to all modules?
How to specify options specific to a certain module, even though it has been defined as a common option?
Keep the following in mind:
When writing an import or export command, keep in mind that -module is considered the beginning of a new set of options. Everything that follows -module forms one set of options.
Everything specified before the first -module option is taken as a set of common options, which are applied to each -module.
If a certain option is specified both as a common option and as a module-specific option, the specific value takes precedence.
Examples are:
Export Everything to "all" Directory, but Policies to "policies" directory
To export everything to "all" directory, but policies to "policies" directory, issue the following command:
$ sh runImportExport.sh -action export -outdir all
-module models -outdir models
-module groups
Export Groups G1 and G2 for Delete Items, and G3 and G4 for Replace Items
To export groups G1 and G2 for delete items and G3 and G4 for replace items, issue the following command:
$ sh runImportExport.sh -action export
-module groups -listelemcmd delete G1 G2
-module groups -listelemcmd replace G3 G4
Transaction handling differs between imports and exports.
Import operates strictly in one transaction, except when using batch mode for importing lists. If there is any error in importing any entity for any module, the entire process is rolled back, and no database updates are committed. Note that though import strictly follows one transaction, it does not break down if it encounters invalid items in a list (for example, importing a city with an incorrect state or country). A warning message is logged and the import process continues, ignoring such items.
Export operates on a "best effort" basis. If an export for any entity fails, it continues with the next entity. The reason is that export does not perform any database updates. It only selects information from the database and places it into files.
To use the IP location loader utility, follow the setup instructions in Section 23.4, "Importing IP Location Data."
This section describes a utility for importing the IP location data into the Oracle Adaptive Access Manager database. This data is used by the risk policies framework to determine the risk of fraud associated with a given IP address.
This section contains the following subsections:
Set up the Oracle Adaptive Access Manager CLI environment before you run any of the scripts. For details refer to Section 23.2, "Setting Up the CLI Environment."
To load data into a Microsoft SQL Server database, sqljdbc.jar should be copied to a third-party directory. This file can be downloaded free of charge from Microsoft at http://www.microsoft.com/downloads/details.aspx?FamilyID=6d483869-816a-44cb-9787-a866235efc7c&DisplayLang=en
Make a copy of the sample bharosa_location.properties file.
cp sample.bharosa_location.properties bharosa_location.properties
Update bharosa_location.properties with the location data details in the following properties. The location data should be obtained from one of the supported vendors (ip2location, maxmind, quova).
Note that the properties marked as "Advanced" should not generally be changed.
Table 23-3 IP Loader Properties
IP Loader Properties | Description |
---|---|
location.data.provider | quova, ip2location, or maxmind |
location.data.file | For example, /tmp/quova/EDITION_Gold_2008-07-22_v374.dat.gz |
location.data.ref.file | For example, /tmp/quova/EDITION_Gold_2008-07-22_v374.ref.gz |
location.data.anonymizer.file | For example, /tmp/quova/anonymizers_2008-07-09.dat.gz |
location.data.location.file | Set only if maxmind location data is to be loaded; otherwise leave this property unset/blank |
location.data.blocks.file | Set only if maxmind location data is to be loaded; otherwise leave this property unset/blank |
location.data.country.code.file | Set only if maxmind location data is to be loaded; otherwise leave this property unset/blank |
location.data.sub.country.code.file | Set only if maxmind location data is to be loaded; otherwise leave this property unset/blank |
location.loader.database.pool.size | Number of threads to use to update the database |
location.loader.dbqueue.maxsize | Advanced: maximum number of location records to be kept in queue for database threads |
location.loader.cache.location.maxcount | Advanced: maximum number of location records to be kept in cache while updating existing location data |
location.loader.cache.split.maxcount | Advanced: maximum number of location split records to be kept in cache while updating existing location data |
location.loader.cache.anonymizer.maxcount | Advanced: maximum number of anonymizer records to be kept in cache while updating existing location data |
location.loader.database.commit.batch.size | Maximum number of location records to batch before issuing a database commit |
location.loader.database.commit.batch.seconds | Maximum time to hold an uncommitted batch |
location.loader.cache.isp.maxcount | Maximum number of ISP records to be kept in cache |
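For example, a minimal bharosa_location.properties for Quova data might contain the following. The file paths match the examples in the table above; the pool size is an arbitrary illustrative value.

```properties
location.data.provider=quova
location.data.file=/tmp/quova/EDITION_Gold_2008-07-22_v374.dat.gz
location.data.ref.file=/tmp/quova/EDITION_Gold_2008-07-22_v374.ref.gz
location.data.anonymizer.file=/tmp/quova/anonymizers_2008-07-09.dat.gz
# Illustrative value; tune to your database and hardware
location.loader.database.pool.size=4
```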
Before running the IP location loader, the Blocks.csv file from MaxMind must be preprocessed with the following commands:
$ mv Blocks.csv Blocks-original.csv
$ sed -e 's/\"//g' Blocks-original.csv | sort -n -t, -k1,1 -o Blocks.csv
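The effect of this preprocessing can be seen on a tiny invented sample: the sed strips the double quotes and the sort orders rows numerically by the first comma-separated field (the starting IP number). The two rows below are made up for illustration; real MaxMind files are much larger.

```shell
# Two invented rows in the quoted Blocks.csv layout, deliberately out of order
printf '"16777472","16778239","17"\n"16777216","16777471","16"\n' > Blocks-original.csv
# Same preprocessing as above: strip quotes, sort numerically on field 1
sed -e 's/\"//g' Blocks-original.csv | sort -n -t, -k1,1 -o Blocks.csv
head -1 Blocks.csv
# prints: 16777216,16777471,16
```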
Refer to Chapter 2, "Setting Up the Oracle Adaptive Access Manager Environment" for information on setting up encryption.
After completing the preceding setup, run the following command to load the location data into the Oracle Adaptive Access Manager database:
From bash shell, execute loadIPLocationData.sh
From Windows command prompt, execute loadIPLocationData.cmd
The command returns 0 when the data load is successful; on failure it returns 1.
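A calling script can branch on this status. The sketch below uses a stub function in place of the real loader so the pattern is self-contained; replace the stub body with the actual invocation, sh loadIPLocationData.sh.

```shell
# Stub standing in for the real loader; it returns 0 (success) here.
# In practice the function body would be: sh loadIPLocationData.sh
load_ip_data() { return 0; }

if load_ip_data; then
    echo "IP location data loaded"
else
    echo "IP location data load failed" >&2
    exit 1
fi
```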
The IP location loader utility reads the information from the IP location data files (from Quova or ip2location or maxmind) to populate the IP location tables in the Oracle Adaptive Access Manager system. The first time the utility is run against a new database, it inserts one or more rows into the vcrypt_ip_location_map table for each record in the data file. It also creates a new record in vcrypt_country for each unique country name in the data file, a new record in vcrypt_state for each unique combination of country name and state name in the data file, and a new record in vcrypt_city for each unique combination of country name, state name, and city name in the data file.
When the IP location loader utility is run with a new data file against an already populated database, it skips records in the datafile that have matching, identical records in the vcrypt_ip_location_map table. It creates a new row in the vcrypt_ip_location_map for each record in the data file whose FROM_IP_ADDR does not already appear in the database. It updates the rows in the vcrypt_ip_location_map whose FROM_IP_ADDR matches the record in the data file, but has different data in other columns. The utility also creates new countries, states, and cities that do not already exist in the database.
The Quova data file is a pipe-delimited ('|') file, with 29 fields on each line, and one record per line. The information in these tables comes from Quova's GeoPoint Data Glossary. In the following table, IP represents the vcrypt_ip_location_map table, CO represents the vcrypt_country table, ST represents the vcrypt_state table, and CI represents the vcrypt_city table.
The file layout is as follows:
Quova Field | Oracle Adaptive Access Manager Field | Description |
---|---|---|
Start IP | IP.from_ip_addr | The beginning of the IP range, also used as an alternate primary key on the vcrypt_ip_location_map table. |
End IP | IP.to_ip_addr | The end of the IP range. |
CIDR | (not used) | |
Continent | (not used) | |
Country | CO.country_name | The country name. |
Country ISO2 | (not used) | |
Region | (not used) | |
State | ST.state_name | The state name. |
City | CI.city_name | The city name. |
Postal code | (not used) | |
Time zone | (not used) | |
Latitude | CI.latitude | The latitude of the IP address. Positive numbers represent North, and negative numbers represent South. |
Longitude | CI.longitude | The longitude of the IP address. Positive numbers represent East, and negative numbers represent West. |
Phone number prefix | (not used) | |
AOL Flag | mapped to IP.isp_id | Tells whether the IP address is an AOL IP address. |
DMA | (not used) | |
MSA | (not used) | |
PMSA | (not used) | |
Country CF | IP.country_cf | The confidence factor (1-99) that the correct country has been identified. |
State CF | IP.state_cf | The confidence factor (1-99) that the correct state has been identified. |
City CF | IP.city_cf | The confidence factor (1-99) that the correct city has been identified. |
Connection type | mapped to IP.connection_type | Describes the data connection between the device or LAN and the internet. See the Connection Type mapping. |
IP routing type | mapped to IP.routing_type | Tells how the user is routed to the internet. See the IP Routing Type mapping. |
Line speed | mapped to IP.connection_speed | Describes the connection speed. This depends on connection type. See the Connection Speed mapping. |
ASN | IP.asn | Globally unique number assigned to a network or group of networks that is managed by a single entity. |
Carrier | IP.carrier | The name of the entity that manages the ASN entry. |
Second Level Domain | mapped to IP.sec_level_domain | The second-level domain of the URL. For example, oracle in www.oracle.com. This is mapped through the Quova reference file. |
Top Level Domain | mapped to IP.top_level_domain | The top-level domain of the URL. For example, .com in www.oracle.com. This is mapped through the Quova reference file. |
Registering Organization | (not used) | |
A table for routing types mapping is shown in Table 23-5.
Table 23-5 Routing Types Mappings
Routing Type | Oracle Adaptive Access Manager ID | Description |
---|---|---|
fixed | 1 | User IP is at the same location as the user. |
anonymizer | 2 | User IP is located within a network block that has tested positive for anonymizer activity. |
aol | 3 | User is a member of the AOL service; the user country can be identified in most cases, but any regional information more granular than country is not possible. |
aol pop | 4 | User is a member of the AOL service; the user country can be identified in most cases, but any regional information more granular than country is not possible. |
aol dialup | 5 | User is a member of the AOL service; the user country can be identified in most cases, but any regional information more granular than country is not possible. |
aol proxy | 6 | User is a member of the AOL service; the user country can be identified in most cases, but any regional information more granular than country is not possible. |
pop | 7 | User is dialing into a regional ISP and is likely to be near the IP location; the user could be dialing across geographical boundaries. |
superpop | 8 | User is dialing into a multistate or multinational ISP and is not likely to be near the IP location; the user could be dialing across geographical boundaries. |
satellite | 9 | A user connecting to the Internet through a consumer satellite, or a user connecting to the Internet with a backbone satellite provider where no information about the terrestrial connection is available. |
cache proxy | 10 | User is proxied through either an internet accelerator or a content distribution service. |
international proxy | 11 | A proxy that contains traffic from multiple countries. |
regional proxy | 12 | A proxy (not an anonymizer) that contains traffic from multiple states within a single country. |
mobile gateway | 13 | A gateway to connect mobile devices to the public internet. For example, WAP is a gateway used by mobile phone providers. |
none | 14 | Routing method is not known or is not identifiable from the preceding descriptions. |
unknown | 99 | Routing method is not known or is not identifiable from the preceding descriptions. |
Table 23-6 shows connection types mappings.
Table 23-6 Connection Types Mappings
Connection Type | Oracle Adaptive Access Manager ID | Description |
---|---|---|
ocx | 1 | This represents OC-3 circuits, OC-48 circuits, and so on, which are used primarily by large backbone carriers. |
tx | 2 | This includes T-3 circuits and T-1 circuits still used by many small and medium companies. |
satellite | 3 | This represents high-speed or broadband links between a consumer and a geosynchronous or low-earth orbiting satellite. |
framerelay | 4 | Frame relay circuits may range from low to high speed and are used as a backup or alternative to T-1. Most often they are high-speed links, so GeoPoint classifies them as such. |
dsl | 5 | Digital Subscriber Line broadband circuits, which include aDSL, iDSL, sDSL, and so on. Speeds generally range from 256k to 20MB per second. |
cable | 6 | Cable modem broadband circuits, offered by cable TV companies. Speeds range from 128k to 36MB per second, and vary with the load placed on a given cable modem switch. |
isdn | 7 | Integrated Services Digital Network high-speed copper-wire technology, supporting 128K per second, with ISDN modems and switches offering 1MB per second and greater. Offered by some major telcos. |
dialup | 8 | This category represents the consumer dialup modem space, which operates at 56k per second. Providers include Earthlink, AOL, and Netzero. |
fixed wireless | 9 | Represents fixed wireless connections where the location of the receiver is fixed. Category includes WDSL providers such as Sprint Broadband Direct, as well as emerging WiMax providers. |
mobile wireless | 10 | Represents cellular network providers such as Cingular, Sprint, and Verizon Wireless, who employ CDMA, EDGE, and EV-DO technologies. Speeds vary from 19.2k per second to 3MB per second. |
consumer satellite | 11 | |
unknown high | 12 | GeoPoint was unable to obtain any connection type, or the connection type is not identifiable from the preceding descriptions. |
unknown medium | 13 | GeoPoint was unable to obtain any connection type, or the connection type is not identifiable from the preceding descriptions. |
unknown low | 14 | GeoPoint was unable to obtain any connection type, or the connection type is not identifiable from the preceding descriptions. |
unknown | 99 | GeoPoint was unable to obtain any connection type, or the connection type is not identifiable from the preceding descriptions. |
This section contains the tables used by the ETL process.
The following tables and sequences are used for uploading the Anonymizer data. Make sure the ETL process has sufficient privileges to read and update these tables.
The IP location loader requires read/write access to the following tables:
VCRYPT_IP_LOCATION_MAP
V_IP_LOCATION_MAP_SEQ
V_IP_LOC_MAP_HIST
V_IP_LOC_MAP_HIST_SEQ
V_IP_LOC_MAP_SPLIT
V_IP_LOC_MAP_SPLIT_SEQ
V_IP_LOC_MAP_SPLIT_HIST
V_IP_LOC_MAP_SPLIT_HIST_SEQ
VCRYPT_COUNTRY
V_COUNTRY_SEQ
V_COUNTRY_HIST
V_COUNTRY_HIST_SEQ
VCRYPT_STATE
V_STATE_SEQ
V_STATE_HIST
V_STATE_HIST_SEQ
VCRYPT_CITY
V_CITY_SEQ
V_CITY_HIST
V_CITY_HIST_SEQ
VCRYPT_ISP
VCRYPT_ISP_SEQ
V_ISP_HIST
V_ISP_HIST_SEQ
V_LOC_LOOKUP
V_LOC_LOOKUP_SEQ
V_LOC_UPD_SESS
V_LOC_UPD_SESS_SEQ
V_UPD_LOGS
V_UPD_LOGS_SEQ
VCRYPT_LONG_VALUE_ELEMENT
V_LONG_VALUE_ELEM_SEQ
VCRYPT_VALUE_LIST
V_VALUE_LIST_SEQ
VCRYPT_VALUE_LIST_HIST
V_VALUE_LIST_HIST_SEQ
VCRYPT_CACHE_STATUS
VCRYPT_CACHE_STATUS_SEQ
The loader script returns 0 when the data load is successful; on failure it returns 1.