21 Best Practices for Using Compute Classic

As you create and manage instances and the associated resources in Compute Classic, consider the following guidelines and recommendations to get the best out of the service in terms of cost, manageability, and performance.

Managing Service Users and Roles

  • Only users with the Compute_Operations role can perform write operations (that is, create, update, and delete resources) in Compute Classic. When you create users in Oracle Cloud Infrastructure Classic Console, assign the Compute_Operations role to only those users who'll be responsible for creating, updating, and deleting instances and the associated storage and networking resources.

  • For business continuity, consider creating at least two users with the Compute_Operations role. These users must be IT system administrators in your organization.

Building Private Images

  • The operating system and software that you use to build private images must have the required licenses. You’re responsible for purchasing the required licenses and ensuring support for any third-party operating systems and software that you run on Compute Classic instances.

  • Plan the packages that you want to include in your images, keeping in mind the workload that you want to deploy.

  • Before creating the image file, plan ahead and provision any users that you'd like to be available when instances are created using the image.

    Note:

    While creating instances, you can specify one or more SSH public keys.

    The keys that you specify are stored as metadata on the instance. This metadata can be accessed from within the instance at http://192.0.0.192/{version}/meta-data/public-keys/{index}/openssh-key.
    • Oracle-provided images include a script that runs automatically when the instance starts, retrieves the keys, and adds them to the authorized_keys file of the opc user.

    • In images that you build, you can write and include a script that runs automatically when the instance starts, retrieves the SSH public keys, and adds the keys to the authorized_keys file of the appropriate users. A sketch of such a script is provided at the end of this list.

  • Before creating the final image file, apply the necessary security patches and review the security configuration.

  • Keep your image disk size as small as possible. A large image takes longer to upload to Oracle Cloud Infrastructure Object Storage Classic and costs more to store. In addition, creating instances and bootable storage volumes from a large image takes longer. Before uploading image files to Oracle Cloud Infrastructure Object Storage Classic, make them sparse files. On Linux, you can convert a file to the sparse format by running the command cp --sparse=always original_file sparse_file. When creating the tar archive, specify the -S option to ensure that the tar utility stores the sparse file appropriately. These steps are illustrated in the second sketch at the end of this list.

  • Choose a tar.gz file name that you can use later to easily identify the key characteristics of the image, such as the OS name, OS version, and the disk size. For example, for a root-disabled, Oracle Linux 6.6 image with a 20-GB disk, consider using a file name such as OL66_20GB_RD.tar.gz.
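
The following is a minimal sketch of a boot-time script like the one described earlier in this list: it retrieves the SSH public keys from the instance metadata and appends them to the authorized_keys file of a user. The user name (opc), the metadata version path (latest), and the assumption that the listing at public-keys/ returns index=name pairs are illustrative; adapt the script to the users in your image.

    #!/bin/bash
    # Sketch: fetch SSH public keys from the instance metadata service and
    # append them to the authorized_keys file of a user.
    # The user name and metadata version path are illustrative.
    MD_URL="http://192.0.0.192/latest/meta-data/public-keys"
    USER_NAME="opc"
    USER_HOME=$(getent passwd "$USER_NAME" | cut -d: -f6)

    mkdir -p "$USER_HOME/.ssh"
    chmod 700 "$USER_HOME/.ssh"

    # Each key is available at {index}/openssh-key; iterate over the listed indexes.
    for index in $(curl -s "$MD_URL/" | cut -d= -f1); do
        curl -s "$MD_URL/$index/openssh-key" >> "$USER_HOME/.ssh/authorized_keys"
        echo >> "$USER_HOME/.ssh/authorized_keys"
    done

    chmod 600 "$USER_HOME/.ssh/authorized_keys"
    chown -R "$USER_NAME:$USER_NAME" "$USER_HOME/.ssh"

The next sketch illustrates the sparse-file and tar archive steps. The file names are illustrative; adjust them to match your image.

    # Convert the image file to the sparse format (Linux).
    cp --sparse=always original.img sparse.img
    mv sparse.img original.img

    # Create the compressed tar archive; -S stores the sparse file appropriately.
    tar -czSf OL66_20GB_RD.tar.gz original.img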

Securing the Operating System on Your Instances

To ensure that Compute Classic instances provide a resilient platform for your workloads, make sure that the latest security patches are applied to the operating system running on the instances. In addition, before deploying applications on an instance, review the security configuration of the operating system and verify that it complies with your security policies and standards.

  • For private images (that is, images that you create and use), apply the necessary security patches and review the security configuration before creating the image file.

  • For Oracle-provided images, apply the necessary security patches and review the security configuration right after you create the instances, before deploying any applications.

For security and patching-related guidelines, see the documentation for your operating system.
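
For example, on an Oracle Linux instance, you might apply the available updates with yum. This is a minimal sketch; the exact commands depend on your operating system and on your organization's patching process.

    # Apply all available updates, including security errata.
    sudo yum -y update

    # To apply only security errata, use the yum security plugin
    # (on older releases, install the yum-plugin-security package first).
    sudo yum -y update --security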

Naming Objects

When you create instances, storage volumes, security lists, and so on, select the name of the object carefully. Pick a name that helps you quickly identify the key characteristics of the object later. For example, when creating a bootable storage volume, consider including the operating system name and the image disk size in the name of the storage volume.

Selecting Shapes

  • When selecting the shape for an instance, consider the nature of the applications that you plan to deploy on the instance, the number of users that you expect to use the applications, and how you expect the load to scale in the future. Remember to also factor in the CPU and memory resources required by the operating system.

  • Select a shape that meets the requirements of your workload with a sufficient buffer for intermittent spikes in the load. If you’re not sure what shape is appropriate for an instance, then start small, experiment with a representative workload, and then settle on a shape. This approach may help you achieve an optimal trade-off between resource allocation and performance.

Using Orchestrations to Automate Resource Provisioning

  • When building orchestrations to create and manage instances, set the high-availability policy to active, to ensure minimal disruption to your operations.

  • Using orchestrations, you can control the placement of instances. You can opt to have instances placed on the same or on different physical nodes. When you use the instance placement feature, consider your requirements for application isolation and affinity. See Relationships Between Objects Within a Launch Plan Object.

  • If you want to shut down an instance but let other instances in the same orchestration keep running, then don't terminate the orchestration. Instead, update the instance and set its desired state to shutdown.

  • Don’t define storage volumes and instances in the same orchestration. By keeping storage volumes and instances in separate orchestrations, you can shut down and start the instances when required and yet preserve the attached storage volumes. Note that the recommendation here is to define the storage volumes outside the instance orchestration. To ensure that the storage volumes remain attached after an instance is re-created, you must define the storage attachments within the instance orchestration.

  • When you create an instance using the Create Instance wizard, a single orchestration v2 is created automatically to manage the instance and its associated resources. Storage volumes and networking objects used by the instance are created in the same orchestration. Instances are nonpersistent by default. However, storage volumes and other objects are created with persistence set to true, so that if you suspend the orchestration, instances are shut down, but storage volumes aren’t deleted. Terminating the orchestration, however, will cause all objects to be deleted and any data on storage volumes will be lost.

  • Earlier, when you created an instance using the Create Instance wizard, one or more orchestrations v1 were created automatically to manage the instance and its associated resources. For example, if you used the Create Instance wizard to create an instance and attach a new storage volume to it, then two separate orchestrations were created, one for the instance and the other for the storage volume. A master orchestration was also created, and the instance and storage volume orchestrations were referenced as objects in the master orchestration.

    Starting or terminating a master orchestration allows you to start or terminate all the nested orchestrations. This is an easy way to handle dependencies across orchestrations. Remember, though, that if your master orchestration references any orchestration that creates storage volumes, then terminating the master orchestration will delete the storage volumes and all the data on them.

    If you want to delete an instance but retain the storage volumes that were created while creating the instance, then terminate only the instance orchestration and let the storage volume orchestration remain in the Ready state.

Managing Storage

  • When you decide the number and size of your storage volumes, consider the limits: a minimum size of 1 GB, a maximum size of 2 TB, sizes in 1-GB increments, and up to 10 volumes per instance.
    • If you attach too many small storage volumes to an instance, then you may not be able to scale block storage for the instance up to the full limit of 20 TB.

    • If you attach many large volumes to an instance, then the opportunities to spread and isolate storage are limited. In addition, too many large volumes may result in lower overall utilization of block storage space, particularly if data isolation is also critical for your business.

    You can increase the size of a storage volume after creating it, even if the storage volume is attached to an instance. See Increasing the Size of a Storage Volume. However, you can’t reduce the size of a storage volume after you’ve created it. So ensure that you don’t overestimate your storage requirement.

    Consider the storage capacity needs of the applications that you plan to deploy on the instance, and leave some room for attaching more storage volumes in the future. This approach helps you use the available block storage capacity efficiently in the long run.

  • To provide highly scalable and shared storage in the cloud over NFSv4 for your instances, consider using Oracle Cloud Infrastructure Storage Software Appliance – Cloud Distribution. This appliance is provisioned on a Compute Classic instance and plays the role of a file server in the cloud. It provides shared, highly scalable, low-cost, and reliable storage capacity in Oracle Cloud Infrastructure Object Storage Classic for your Compute Classic instances running Oracle Linux. For information about the use cases that the appliance is best suited for, see About Oracle Cloud Infrastructure Storage Software Appliance – Cloud Distribution in Using Oracle Cloud Infrastructure Storage Software Appliance.

  • Create and use separate storage volumes for your applications, data, and the operating system. Use a configuration management framework such as Chef or Puppet for managing the configuration of the operating system and applications.

  • To ensure that storage volumes remain attached and mounted after instances are deleted and re-created, do both of the following (a sketch of mounting an attached volume persistently is provided at the end of this list):
    • Define the storage attachments within the orchestration that you use to create instances. Note that the recommendation here is to define the storage attachments, and not the storage volumes, in the orchestration that you use to create instances.

    • Set up the instance to boot from a bootable storage volume.

  • If you’re sure that a storage volume is no longer required, then back up the data elsewhere and delete the storage volume.
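
The following is a minimal sketch of formatting and mounting an attached nonboot storage volume so that it's mounted again automatically when the instance starts. The device name (/dev/xvdb), file system type, and mount point are illustrative and depend on how the volume is attached to your instance.

    # Create a file system on the attached volume (one-time step; erases the volume).
    sudo mkfs -t ext4 /dev/xvdb

    # Mount the volume and add an /etc/fstab entry keyed by UUID, so that the
    # mount doesn't depend on the device name assigned at boot.
    sudo mkdir -p /mnt/data
    UUID=$(sudo blkid -s UUID -o value /dev/xvdb)
    echo "UUID=$UUID /mnt/data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
    sudo mount -a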

Configuring Network Settings

  • When you create an instance, if you opt for an autogenerated public IP address, then the IP address so allocated persists only during the life of the instance. If the instance is deleted and re-created by terminating and starting its orchestration, then the instance gets a new public IP address. To assign a fixed public IP address to an instance, reserve a public IP address, and attach it to the instance—either when you create the instance or, later, by updating the IP reservation.

  • If you’ve created an IP reservation and you no longer need it, delete it.

  • If a persistent public IP address is associated with an instance during instance creation, you can remove that IP address from the instance later, if required. However, don't delete the IP reservation itself. If you delete and re-create the instance, the same IP reservation is required again, and if you've deleted it, you won't be able to re-create the instance.

  • You can attach an instance to a maximum of five security lists, and you can use a security list as the source or destination in up to 10 security rules. Plan your security lists and security rules keeping these overall limits in mind.

    Note:

    If an instance is added to multiple security lists that have different policies, then the most restrictive policy is applicable to the instance.

  • Plan your IP networks, keeping the following overall limits in mind.

    • As a best practice, add a maximum of 20 IP networks to an IP network exchange. Due to DHCP limitations, routing is automatically configured for only 20 IP networks in an IP network exchange. If you want to add more than 20 IP networks to an IP network exchange, you'll need to manage routing in each instance manually.

    • The prefix length of the IP address prefix that you specify in an IP network should be between /16 and /30.

    • You can specify a maximum of 2047 IP address prefixes in an IP address prefix set.

    • In a security rule, you can specify a maximum of 32 security protocols, 32 source IP address prefix sets, and 32 destination IP address prefix sets.

    • In a security protocol, you can specify a maximum of 32 port numbers or port range strings for Source Port Set and Destination Port Set.

    • In a vNICset, you can specify a maximum of 32000 vNICs and 256 access control lists (ACLs).

Ensuring Secure Access to Instances

  • Ensure instance isolation by creating security lists and adding instances to the appropriate security lists. Instances within a security list can communicate with each other freely over any protocol. To allow incoming traffic to all the instances in a security list, set up a security rule with the security list as the destination and with the required source and protocol settings.

  • Use security rules carefully and open only a minimal and essential set of ports. Keep in mind your business needs and the IT security policies of your organization.

  • When you add an instance to a security list, all the security rules that use that security list—as either the source or destination—are applicable to the instance. Consider a security list that is the destination in two security rules, one that allows SSH access from the public Internet and another that permits HTTPS traffic from the public Internet. When you add an instance to this security list, the instance is accessible from the public Internet over both SSH and HTTPS. Keep this in mind when you decide the security lists that you want to add an instance to.

  • If you're creating a Linux or Oracle Solaris instance, then try to determine, up front, how many users you expect to access the instance, and plan for a separate SSH key pair for each user, as illustrated in the sketch at the end of this list.

  • Using the Web Console, you can associate a maximum of 10 SSH keys with your instance.

  • Keep your SSH keys secure. Lay down policies to ensure that the keys aren’t lost or compromised when employees leave the organization or move to other departments. If you lose your private key, then you can’t access your instances. For business continuity, ensure that the SSH keys of at least two IT system administrators are added to your instances.

  • If you need to edit the ~/.ssh/authorized_keys file of a user on your instance, then before you make any changes to the file, start a second SSH session and ensure that it remains connected while you edit the authorized_keys file. This second SSH session serves as a backup. If the authorized_keys file gets corrupted or you inadvertently make changes that result in your getting locked out of the instance, then you can use the backup SSH session to fix or revert the changes. Before closing the backup SSH session, test the changes you made in the authorized_keys file by logging in with the new or updated SSH key.
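
For example, you might generate a separate SSH key pair for each user and add each public key to the authorized_keys file of the corresponding user on the instance. This is a minimal sketch; the key file names and the user name are illustrative.

    # On each user's workstation: generate a key pair (keep the private key secure).
    ssh-keygen -t rsa -b 2048 -f ~/.ssh/user1_compute_classic -C "user1"

    # On the instance, after copying user1_compute_classic.pub to it:
    # append the public key to that user's authorized_keys file.
    sudo mkdir -p /home/user1/.ssh
    sudo sh -c 'cat user1_compute_classic.pub >> /home/user1/.ssh/authorized_keys'
    sudo chmod 700 /home/user1/.ssh
    sudo chmod 600 /home/user1/.ssh/authorized_keys
    sudo chown -R user1:user1 /home/user1/.ssh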

Configuring Third-Party Devices When Setting Up a VPN Connection

Consider the following suggestions when configuring your third-party device for a VPN connection.
  • Configuration Information

    Use the following IPSec configuration for policy-based VPN:

    • Authentication: Pre-shared keys

    • Encryption: 3DES, AES-128, AES-192, AES-256

    • Hash: MD5, SHA-1, SHA-2

    • Policy Group: Diffie-Hellman groups supported are 2, 5, 14, 15, 16, 17, 18, 22, 23, 24

    • ISAKMP: IKEv1 only. If IKEv2 is enabled by default, turn it off.

    • Exchange type: Main Mode (The Cloud gateway uses main mode in phase one negotiations)

    • IPSec protocol: ESP, tunnel-mode

    • PFS: Enabled

    • IPSec SA session key lifetime default: 28,800 seconds (8 hours); 3,600 seconds (1 hour) on Cisco devices

    • IKE session key lifetime default: 3,600 seconds (1 hour); 86,400 seconds (24 hours) on Cisco devices

  • General and Debug Information 

    • It is highly recommended that the third-party device be configured to be responder-only.

    • The third-party device must support and be configured for policy-based VPN.

    • The Cloud gateway uses IPSec and is behind a NAT, so network address translation traversal (NAT-T) is required. The third-party device must support NAT-T. NAT-T requires UDP port 4500 to be open.
    • Avoid setting up numerous IP networks with a /32 subnet. Instead, use a smaller number of IP networks with larger subnets. If you create a very large number of IP networks, a large number of IPSec security associations are required, which could cause performance degradation on some third-party devices.
    • Ensure the IKE and IPSec timeouts on the Cloud gateway and the third-party device are the same.

    • For Phase 1, ensure that the IKE ID on the Cloud gateway and the third-party device match. 

    • Check each security application on the third-party device to ensure that idle timeouts and traffic volume limits are reasonable.

    • After a VPN connection is set up, if you can connect to instances on some subnets but not on others, check that both gateways have the correct set of subnets configured. If the third-party device has some subnets configured that aren’t on the Cloud gateway, the Cloud gateway won’t report an error. However, if the Cloud gateway has some subnets configured that aren’t on the third-party device, it might result in a flapping tunnel.

  • HA Information

    • When HA is configured, Dead Peer Detection (DPD) must be enabled to detect when a tunnel is down.

    • When HA is configured, asymmetric routing across the tunnels that make up the VPN connection will occur. Ensure that your firewall is configured to support this. If not, traffic will not be routed reliably.

    • Switching tunnels might take 30–40 seconds.