Oracle9i Application Server Best Practices Release 1 (v1.0.2.2) Part Number A95201-01
This chapter details a number of security best practices for the Oracle HTTP Server. It does not address security issues pertaining to access to the Oracle database.
Unlike most other system and application areas, security is a very open-ended concern, simply because there is no specification which states how one is allowed to break security. Oracle makes no claim, therefore, that this document is a comprehensive review of all Oracle HTTP Server security issues.
Security problems pertaining to intranets are widely understood, because intranets have been deployed for many years. IT managers and personnel have determined important intranet security risks and the appropriate levels of response to combat these risks.
This is not the case for Internet issues, where risks are not generally understood, and appropriate responses to risks have not been generally agreed upon. We will focus, therefore, on issues which matter primarily in Internet environments. We will give only a glance here and there to issues which matter primarily in intranet-restricted applications and services.
We used several criteria when selecting topics for this chapter. To be included, the information should:
We expect our recommended best practices will be rewritten on a regular basis, enhanced and modified in response to the needs of the Oracle community. Many of the topics first appeared as questions and issues on Oracle security-related discussion forums, which we feel gives them especially high priority.
This section discusses firewall architecture from the perspective of organizations wishing to provide Internet-accessible services. We will ignore services available only on intranets, browsers attached to the Internet that access services on the Internet, and intranet-attached processes which need access to Internet services.
There is no single best architecture for accommodating Internet requests requiring access to corporate intranets. Instead, trade-offs must be made between two competing goals:
Neither goal can be totally met, because complete security means no access to services, while complete ease of access means that anyone is free to peruse, corrupt, or modify corporate sites. We can only try to reach a balance between these goals, whose relative priorities are often unclear.
Oracle recommends two approaches, which we believe should satisfy the requirements of most of our customers:
For the purposes of this discussion of Oracle's recommendations, network architecture can be divided into the three regions shown in Figure 4-1. On one side is the wide world of the Internet; on the other side is the corporate intranet. In the middle is the De-Militarized Zone or DMZ (from the military term for an area between two opponents where fighting is prevented), separated from both of them by devices called firewalls. These firewalls block or allow data transfers based on IP address, port, protocol type, or some combination of these. They can also employ stateful inspection technology to detect illegal protocol transitions. For more information on stateful inspection, see "SEC-1: Server Placement".
Firewalls are sometimes defined as including the DMZ, the exterior firewall, and the interior firewall parts of Figure 4-1. In this less common definition, what we call the interior firewall is typically called a router.
The bastion hosts shown in Figure 4-1 are well-fortified servers running initial point of contact protocols, such as HTTP or SMTP. They should be set up with the expectation that outsiders will attempt to break into them. Special care should be taken to ensure that break-ins are difficult. If a break-in does occur, then there should be good fault containment.
The switched connection DMZ architecture shown in Figure 4-2 takes advantage of newer firewall technology, which allows inexpensive switched connection attachments of servers. With switched connections one server cannot see the traffic generated by another server, except for broadcasts. This provides a major fault containment benefit compared to bussed connections, where all devices attached to the bus can view traffic to and from other devices on the bus.
While we recommend these approaches, we do not mean to exclude all alternatives. Other reasonable network structures might give higher priority to security or ease of access. Consideration of the issues, risks, and costs of these alternative approaches, however, should precede any deployment decision.
Note that outgoing requests should be handled differently from incoming requests and governed by different policies. A reasonable policy, for example, might allow outgoing FTP requests but no incoming FTP requests.
Place servers providing Internet services behind an exterior firewall of the stateful inspection type.
It is very important that the servers used to provide Internet services be placed in the appropriate part of the service provider's network architecture. In almost no cases should servers be placed directly on the Internet, where they are vulnerable to too many forms of attack. They should instead be behind an exterior firewall.
The best firewall type is the stateful inspection type, such as those available from Checkpoint (http://www.checkpoint.com) and other vendors. Stateful inspection means that the firewall keeps track of various sessions by protocol and ensures that illegal protocol transitions are disallowed through the firewall. This blocks the types of intrusion which exploit illegal protocol transitions.
Set exterior firewall rules to allow Internet-initiated traffic only through specific IP and port addresses where SMTP/POP3/IMAP4 or HTTP services are running.
Generally, it is best to provide only SMTP/POP3/IMAP4 (e-mail) and HTTP (browser) services to the Internet, because other protocols are too vulnerable to attack. Where it is feasible, it is best to provide the HTTP and SMTP/POP3/IMAP services on the DMZ, while running applications or accessing databases on the intranet part of the network.
IP and port combinations which are not assigned to running programs should not be permitted. Once messages are received by the HTTP or SMTP/POP3/IMAP server, they can be forwarded from DMZ processes to intranet processes.
Handling of other protocols by processors attached to the DMZ, especially raw TCP/IP or UDP, is not recommended. FTP creates major security vulnerabilities, because it essentially sends usercodes and passwords in the clear. IIOP opens too many ports that have no waiting system processes attached. If handling these protocols on the DMZ is a requirement, then the processors doing the handling should have no incoming access to the intranet.
Set interior firewall rules to allow messages through to the intranet only if they originate from servers residing on the DMZ.
Intranet-initiated requests to the DMZ are not a problem, but no direct access to the intranet from the Internet should be allowed. All incoming messages must first be processed on the DMZ.
Send outgoing messages through proxies on the DMZ.
Messages originating from processes in the intranet can be passed directly to the Internet. For more security, they can be forwarded to the Internet by proxies on the DMZ. Many proxy types exist for the different protocols typically operating over TCP/IP. If even more security is desired, then we recommend the switched connection DMZ host architecture shown in Figure 4-2.
Do not store the information of record on bastion hosts.
Information and processing should be segmented such that bastion hosts provide initial protocol server processing and generally do not contain information of a sensitive nature. They should certainly not contain the information of record. That is to say, updates or corruption to information on bastion hosts should not result in updates to the database of record. The database of record and all sensitive processing should reside on the intranet.
Disallow all traffic types unless specifically allowed.
No one can predict what form the next attack on your network might take, and disallowing only the forms taken by past attacks will always leave you one step behind the attackers. We recommend instead that you disallow all types of traffic not required by your organization.
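The default-deny principle can be illustrated as a membership test against an explicit allowlist. The protocols and ports below are hypothetical examples, not a recommended configuration; real firewalls match on far richer criteria (source and destination IP, connection state):

```python
# Sketch of a default-deny rule check: every (protocol, port) pair is
# blocked unless it appears on an explicit allowlist.
ALLOWED = {
    ("tcp", 80),    # HTTP to the DMZ Web servers
    ("tcp", 443),   # HTTPS
    ("tcp", 25),    # SMTP to the DMZ mail relay
}

def permit(protocol: str, port: int) -> bool:
    """Return True only for traffic explicitly allowed; deny by default."""
    return (protocol, port) in ALLOWED

print(permit("tcp", 443))   # explicitly allowed
print(permit("tcp", 21))    # FTP: denied by default
```

The important property is the direction of the default: an unlisted protocol, including one used by a future attack, is rejected without any rule having anticipated it.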
With the bussed connection firewalls discussed in the previous best practices, rules allowing or disallowing traffic between different server/hosts can become quite complex. While in the past there was often concern with building multiple levels of DMZs for better fault containment, most such questions are eliminated with the use of switched connection firewalls.
Some of the previous best practices, applicable to DMZs with bastion hosts, apply as well to switched connection DMZ architecture. Routing all incoming traffic to a DMZ server before forwarding it to the intranet is still a good idea. Also still a best practice is banning messages from the Internet to the DMZ, if the messages have IP addresses used in the DMZ. This preserves the DMZ concept, even though the DMZ is no longer a formal LAN or network.
Switched connection DMZ hosts architecture allows secure segregation of processing tasks. In Figure 4-3 for example, inexpensive servers assisted by cryptographic hardware are used to convert HTTP to HTTPS, while separate hosts are used to segregate HTTP servers from servlet servers on the DMZ.
The switched interior/exterior firewall in this example, which may be built from a number of distinct pieces of hardware and software acquired from different vendors, can provide quite complex routing rules.
Incoming HTTPS traffic might get routed first to reverse proxies, which would convert the HTTPS protocol to HTTP. Such traffic could then be routed to HTTP servers, where requests for static content might be satisfied, while dynamic content requests might be routed to servlet servers. The servlet servers might then route requests to the intranet for further processing and database access. Another category of processors might provide Web cache capabilities.
This architecture provides excellent fault containment. If one of the HTTP servers is compromised, then it cannot see the traffic from other HTTP servers or servlet servers. Further, rules provided to the firewall could prevent compromised servers from accessing other servers on the DMZ or intranet, thereby preventing further corruption and theft. Oracle plans to explore such architectural alternatives for many of Oracle's products in a forthcoming Firewall and Load Balancing white paper.
See Also:
For information about stateful firewalls and Checkpoint products, visit the Checkpoint Web site (http://www.checkpoint.com).
Attacks on corporate networks often attempt to trick components with high levels of privilege into revealing information or modifying internal infrastructure, thereby allowing further incursion into protected resources. This section presents best practices pertaining to the setting of program privileges, including usercode and password management.
When assigning privileges to modules, use the lowest levels adequate to perform the module's functions.
This does not prevent trickery, but it limits the damage if a module is taken over or tricked. Faults are better contained.
Ensure that programs are reviewed against buffer overflow for received data.
Buffers can overflow into adjacent data structures, enabling a number of exploitation types, especially denial of service. Buffer overflows are considered by many to be the leading area of security vulnerability and as such deserve special consideration.
Ensure that programs are reviewed against cross-site scripting attacks.
These attacks typically trick HTML and XML processing via input from browsers (or processes which act like browsers) to invoke scripting engines inappropriately. In one of the attack's basic forms, the attacker enters escape characters such as < and > when presented a form for regular input via a browser. With careful crafting, the attacker can cause a script engine to process the attacker's script using the security level of the script processing engine (which may be quite high).
Consider a simple example. A form is presented to a browser requesting a description of a desired good or service. The attacker enters a bogus description along with < and > escape characters. The escape characters would later cause the output processor to process Javascript (entered as part of the bogus description) when the description was replayed. The script could read protected information from the server and then send it to the attacker via an SMTP (e-mail) message.
No simple and effective solutions to this problem exist. Each application writer needs to write code to scan input to ensure that such trickery is not being attempted.
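One small piece of such scanning is to escape HTML metacharacters in user input before it is replayed in a page. A minimal sketch in Python, using the standard library's html.escape; a real application also needs context-aware output encoding and input validation:

```python
import html

def sanitize_for_display(user_input: str) -> str:
    """Escape HTML metacharacters so replayed input cannot start a script.

    Converts <, >, &, and quotes to character entities, so a bogus
    description containing <script>...</script> is displayed as text
    rather than executed by the browser.
    """
    return html.escape(user_input, quote=True)

description = '<script>document.location="http://attacker.example/"</script>'
print(sanitize_for_display(description))
```

Escaping on output is only one layer; the text above is right that each application must still validate its own input, since escaping appropriate for HTML bodies is not sufficient for attribute, URL, or script contexts.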
All U.S. built products containing cryptographic software should be reviewed by the building company's export control unit months in advance of production or Beta shipment outside the U.S. or Canada.
A one-time review by the U.S. Department of Commerce is required for all U.S. built products using encryption technology, if the products are to be shipped outside the U.S. and Canada. Legal penalties exist for exporting such products without proper license. This review, which can take many weeks, applies to all new products and may apply to patched or new versions of existing products as well.
In order to meet on-time delivery of products to the international market, it is essential that development consult with the corporate export control department months in advance of international shipment for both product Beta and production use. Corporate export control will advise development staff on the proper processes to ensure that appropriate licenses are obtained in time.
Where cryptographic functions are needed, use already reviewed facilities (that is, common code). This will keep the one-time review as short as possible. When shipping new versions of existing products, avoid changes to portions involving encryption if possible. This may eliminate delays for governmental review entirely.
When deploying software, change all default passwords and close accounts used for samples and examples.
The risks here should be clear.
Apply all relevant security patches.
Check Metalink (http://metalink.oracle.com/) and TechNet (http://technet.oracle.com/index.html) for current security alerts. Many of these patches address publicly announced security holes.
Remove unused services from all hosts.
Examples of unused services are FTP, SNMP, NFS, BOOTP, and NEWS. It is almost always worthwhile finding ways to eliminate FTP, because it is especially noxious. HTTP or WebDAV may be good alternatives.
Limit the number of people with root privileges.
Disable the "r" commands, such as rlogin and rsh, if you do not need them.
In this section, we explain why the Verisign 40-bit certificates are appropriate for use by nearly all organizations which desire bulk encryption protected by 112-bit, 128-bit, or 168-bit key sizes.
As a direct result of export rules in force until early 2000, Web browsers and server software containing encryption technology were often built in DOMESTIC and EXPORT versions--both official designations of the U.S. Department of Commerce. The DOMESTIC versions used strong encryption, and the EXPORT versions used weak encryption.
Many users outside the United States and Canada (and some domestic users as well) therefore currently have weak-encryption versions of popular browsers like Netscape Navigator and Microsoft Internet Explorer. In this context, weak encryption means key sizes of 64 bits or less (including 40-bit keys) for RC4 and DES bulk encryption. Strong encryption means key sizes greater than 64 bits, including the common 112-bit, 128-bit, and 168-bit key sizes.
Many organizations (especially in the financial, securities, medical, and insurance markets) believe that weak-encryption SSL communication is unacceptable for their applications. They have successfully lobbied for export laws and international agreements to allow single-use licenses for strong-encryption server products in their markets.
Unfortunately, use of strong-encryption server products with weak-encryption browsers results in weak-cryptography SSL sessions. So even with their licensed strong-encryption servers, these organizations would often be limited to unacceptable weak-encryption SSL sessions when communicating with their customers.
Verisign, the largest supplier of server certificates, responded to this problem by developing a Global Server ID (GSID) Certificate and getting approval for it from the U.S. Department of Commerce. The GSID certificate contains a signed digital right, which will be called the step-up digital right in this discussion.
In conjunction with development of the step-up digital right, browser and server logic supporting SSL was revised such that when a GSID certificate was used in a server, weak-encryption (that is, EXPORT version) client browsers would automatically step up their encryption strength from weak encryption to strong encryption.
Global Server IDs are now of limited utility, however, because the strong encryption export ban was lifted early in 2000. Now anyone outside the handful of countries designated as "terrorist" by the U.S. State Department can legally download a strong-encryption browser.
Configure your Web server to fail attempts to use weak encryption. Display an error page explaining the need to upgrade client browsers to 128-bit (strong encryption).
If you are considering the Verisign 128-bit certificate to ensure 128-bit encryption, then use the 40-bit certificate (or an equivalent certificate from another vendor) instead, and eliminate weak-encryption cipher suites from those allowed by the Web server. Attempts to use weak encryption will then fail, and an error page can be displayed explaining the need to upgrade the browser to 128-bit (strong) encryption.
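In an Apache-style configuration (the Oracle HTTP Server is Apache-based), one way to sketch this practice uses mod_ssl's per-directory key-size check. Directive support and exact syntax vary by server and mod_ssl version, so treat this as an illustration rather than a drop-in configuration:

```apache
# Sketch only: refuse requests whose SSL session was negotiated with a
# bulk-encryption key shorter than 128 bits, and display an explanatory
# error page instead. The path and page name are illustrative.
<Directory "/secure">
    SSLRequire %{SSL_CIPHER_USEKEYSIZE} >= 128
    ErrorDocument 403 /errors/upgrade-browser.html
</Directory>
```

Alternatively, the weak cipher suites can be removed from the server's SSLCipherSuite list entirely, in which case the handshake itself fails for export-grade browsers.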
Some service providers may see customer inconvenience issues in requiring a move up to 128-bit browser versions. But the move is really a win-win situation for the customer, because:
See Also:
http://www.verisign.com/ for general information about Verisign
http://www.verisign.com/products/site/secure/index.html for information on Verisign Secure Site Services
http://www.verisign.com/products/site/secure/Secure-Site.pdf for a discussion of differences between 128-bit and 40-bit certificates
The amount of CPU resources required to accommodate varying loads, both with and without security, is an important best practices issue. When developing applications where cryptography is required, it is important to get an early estimation regarding the CPU and other resources required for volume production systems. Applications are often developed without adding SSL until late in the development cycle, and unpleasant performance surprises frequently occur. Using HTTPS with SSL can increase CPU requirements by 10 to 100 times, compared to HTTP without SSL.
Reverse proxies and special HTTP-to-HTTPS conversion hardware appliances offer the prospect of significant performance gains by shifting SSL processing away from Web servers and Web caches. Oracle is currently investigating these options and will report on them in a later version of Oracle9i Application Server Best Practices.
Below are some rules of thumb for SSL in general, relevant for conventional Wintel and Sun processors of around 200 MHz. Because SSL can run on many different platform types in many different implementations, our predicted results should be taken as neither definitive nor particularly accurate for any particular Oracle product. Your results may vary by one half to one order of magnitude.
This assumes only one modest-sized (15K or less) HTTP or HTTPS message is sent every few minutes. When many pages are sent or received from a single browser in a small time period (two minutes or less), caching of bulk-encryption keys can reduce the performance difference between HTTP and HTTPS. But you should expect at least a 10-to-1 slowdown when using HTTPS for normal traffic, and you should always test reasonable load scenarios.
Performance test applications during development.
Performance testing with SSL early in the development cycle is prudent. Test during the prototype or feasibility stages if possible. Testing should emulate volume production.
Ensure that sequential HTTPS transfers are requested of the same Web server.
Expect several hundred milliseconds to be required to initiate SSL sessions on a 300 MHz machine. Most of this CPU time is spent in the key exchange logic, where the bulk-encryption key is exchanged. Caching the bulk-encryption key will significantly reduce CPU overhead on subsequent accesses, provided that the accesses are routed to the same Web server.
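In Apache-style servers, this bulk-encryption key caching corresponds to the SSL session cache. A sketch follows; the cache type, path, sizes, and timeout are illustrative and vary by mod_ssl version:

```apache
# Sketch only: an inter-process SSL session cache lets subsequent requests
# from the same browser reuse the negotiated bulk-encryption key instead
# of repeating the expensive key exchange.
SSLSessionCache        shmcb:/var/cache/ssl_scache(512000)
SSLSessionCacheTimeout 300
```

Note that the cache only helps if a load balancer routes a given browser's subsequent requests to the same Web server, which is the point of the practice above.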
Keep secure pages and pages not requiring security on separate virtual servers.
While it may be easier to place all pages for an application on one HTTPS virtual server, the performance cost is very high. Reserve your HTTPS virtual server for pages needing SSL and put the pages not needing SSL on an HTTP virtual server.
If secure pages are composed of many gif, jpeg, or other files that would be displayed on the same screen, then it is probably not worth the effort to segregate secure from non-secure static content. The SSL key exchange (the major consumer of CPU cycles) is likely to be called exactly once in any case, and the overhead of bulk encryption is not that high.
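The separation can be sketched in Apache-style configuration as two virtual servers; host names and paths are illustrative:

```apache
# Sketch only: pages not requiring security are served over plain HTTP,
# while pages needing SSL live on a separate HTTPS virtual server.
<VirtualHost *:80>
    ServerName   www.example.com
    DocumentRoot /www/public
</VirtualHost>

<VirtualHost *:443>
    ServerName   secure.example.com
    SSLEngine    on
    DocumentRoot /www/secure
</VirtualHost>
```

Links from the public pages into the secure area then use https:// URLs explicitly, so only the pages that need SSL pay its CPU cost.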
In the near future, the identity of browser users will be authenticated more and more frequently by client certificates which comply with the ITU X.509 version 3 specification. Client certificates contain the following information:
Certificates are issued by a certificate authority (CA). Certificates are often issued in-house by large companies, but CA services can also be outsourced. As of January 2001, Verisign is the largest outsourcing agency of certificates, with about 90% of the market. Other players in the market include Thawte, EnTrust, and GTE Cybertrust (part of Baltimore Technologies).
CA standard practice is first to issue certificates and then to keep revocation lists. This allows certificate validation to be handled in the same manner as credit card validation. That is, a certificate is presumed valid if both of these are true:
Revocation lists are usually maintained at a central server managed by the CA. They can also be replicated or partially replicated, depending on the security policies of the CA.
Certificates are relatively new instruments for providing user authentication, and firms may be tempted to build policies for them similar to those already in place for usercodes. The traditional usercode model works well for intranets, but it may work poorly in the evolving world of outsourced Internet services. If each service provider were to create its own certificate building rules, then users would be forced to have many different certificates, one for each of the provided services.
It would be very difficult for users to manage multiple certificates within one organization with the infrastructure currently available. The difficulty level would rise even higher if multiple certificates were required for a single transaction (because it included services from a number of organizations). When creating and managing certificates, assume that users will have one certificate for personal use and at most one for each organization where they work.
The Oracle HTTP Server has facilities for certificate handling, including revocation lists. The Oracle HTTP Server allows use of certificates for authentication via SSL and for authorization using the Oracle HTTP Server's URL Wildcard scheme. URL Wildcards are specified with an authorization that can depend upon any of the fields of authenticated certificates. Certificates received via the SSL mechanism encounter a series of checkpoints, each of which must be passed before proceeding to the next:
Each URL Wildcard has a list of allowable valid certificate patterns. A certificate pattern can include any of the fields of a certificate. This could include the issuer, the organizational unit of the owner, the owner's name, or fragments of any of these fields.
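The pattern matching described above might be sketched as follows. The field names, patterns, and shell-style wildcard syntax here are hypothetical illustrations, not the Oracle HTTP Server's actual URL Wildcard syntax:

```python
from fnmatch import fnmatchcase

def certificate_matches(cert: dict, pattern: dict) -> bool:
    """True if every field constrained by the pattern matches the
    corresponding certificate field (shell-style wildcards allowed).
    Fields absent from the pattern are unconstrained."""
    return all(fnmatchcase(cert.get(field, ""), wanted)
               for field, wanted in pattern.items())

cert = {"issuer": "Oracle",
        "organizational_unit": "Server Technologies",
        "common_name": "Pat Example"}

# Allow anyone whose certificate was issued by Oracle:
print(certificate_matches(cert, {"issuer": "Oracle"}))
```

A list of such patterns attached to a URL Wildcard then grants access if any pattern matches the authenticated certificate.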
Authentication using X.509 certificates (rather than usercode/password) offers several interesting possibilities, especially when viewed from the perspective of the relatively new market of outsourced CA services:
Each user (whose identity would include organizational unit and issuer fields) would be explicitly entered in access control lists. Membership in such lists would be required to allow access to protected services.
Thus, Gartner might allow anyone at Oracle to access their market research information, even though they might not specifically know that the person with a particular distinguished name was an Oracle employee. Also, Oracle might allow anyone whose issuer was "Oracle" to have access to certain Oracle services.
A revocation list could be used, for example, only to exclude certificates whose private keys were compromised or certificates of people who left an organization. On the other hand, it might be used instead to remove both invalid users (compromised keys and those who leave) and otherwise valid users (employees on probation). There is nothing which prevents organizations from adding and deleting certificates from revocation lists.
Ensure that certificate organization unit plus issuer fields uniquely identify the organization across the Internet.
One way to accomplish this would be to include the Dun and Bradstreet or IRS identification as identification for the issuer and the organizational unit within the certificate.
Ensure that certificate issuer plus distinguished name uniquely identify the user.
If the combination of issuer and distinguished name is used as identification, then there must be no chance of duplication. Note that authentication based on the public key may be a poor idea, because you would have to revoke it if the private key were compromised. Public key authentication is risky even if you expect to use the certificate only within an intranet, because you may later decide to outsource services.
Include expiring certificates in tests of applications using certificates.
Expiration is an important consideration for a number of reasons. Unlike most usercode-based systems, certificates expire automatically. The expiration period is typically (but not necessarily) one year. With longer-duration certificates, fewer reissues are required, but revocation lists become larger.
Expiring certificates can become time bombs of bugs, if they are not included in tests of application systems using certificates. Consider the following examples:
In systems where certificates replace traditional usercodes, these situations may result in unexpected bugs. Careful consideration of the effects of expiration is required. You will need to develop new policies, because most application and infrastructure developers have not yet worked in systems where authorization might change during transactions.
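A test along these lines can be sketched as a simple validity check over a transaction's time interval; the function and field names are hypothetical:

```python
from datetime import datetime, timedelta

def valid_throughout(expires: datetime, start: datetime, end: datetime) -> bool:
    """True only if the certificate remains valid for the whole interval,
    so a long-running transaction cannot lose authorization midway."""
    return expires > end

# A certificate expiring in 30 minutes, checked against a
# one-hour transaction window:
now = datetime(2001, 6, 1, 12, 0)
expires = now + timedelta(minutes=30)
print(valid_throughout(expires, start=now, end=now + timedelta(hours=1)))
```

Test suites should exercise both sides of this boundary: certificates that expire just before, during, and just after a transaction, since the failure modes differ in each case.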
Use certificate reissues to update certificate information.
Because certificates expire, infrastructure for updating expired certificates will be required. Take advantage of the reissue to update organizational unit or other fields.
In cases of mergers, acquisitions, or status changes of individual certificate holders, consider reissue even when the certificate has not yet expired. But pay attention to key management. If the certificate for a particular person is updated before it expires, for example, then should the old certificate automatically be put on the revocation list?
Audit certificate revocations.
Revocation audit trails can help you reconstruct the past when necessary. An important example is replay of a transaction to ensure the same results on the replay as during the original processing. If the certificate of a transaction participant was revoked between the original and the replay, then the audit trail enables certificate "unrevocation".
For years U.S. developers have had to cope with building DOMESTIC and EXPORT versions of products containing cryptographic technology. In many cases, getting products through the complex process of certification for export was so difficult that it limited other areas of functionality as well.
Changes to U.S. export law in early 2000 eliminated the requirement for EXPORT and DOMESTIC versions of products containing cryptographic functions. But each new version of software containing cryptographic functions must still have a one-time U.S. government review before shipment outside the U.S. and Canada.
As of June 2001, rules for products containing encryption are roughly:
Examples of cryptographic functions that can be used without restriction or license include digital signatures and authentication technology in general. These functions are exempt because they could not easily be adapted for keeping the content of messages or data confidential.
These reviews generally take six weeks or less if new cryptographic techniques are not employed. Using encryption technology such as SSL (which has already been reviewed) will usually shorten the review. Developing or using a new encryption technology or a new usage of existing technology can lengthen the reviews to many months.
Copyright © 2001 Oracle Corporation. All Rights Reserved.