Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning |
Part I Upgrading With Solaris Live Upgrade
1. Where to Find Solaris Installation Planning Information
2. Solaris Live Upgrade (Overview)
3. Solaris Live Upgrade (Planning)
Solaris Live Upgrade Requirements
Solaris Live Upgrade System Requirements
Installing Solaris Live Upgrade
Solaris Live Upgrade Disk Space Requirements
Solaris Live Upgrade Requirements if Creating RAID-1 Volumes (Mirrors)
Upgrading a System With Packages or Patches
Upgrading and Patching Limitations
Guidelines for Selecting Slices for File Systems
Guidelines for Selecting a Slice for the root (/) File System
Guidelines for Selecting Slices for Mirrored File Systems
General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems
Guidelines for Selecting a Slice for a Swap Volume
Configuring Swap for the New Boot Environment
Failed Boot Environment Creation if Swap is in Use
Guidelines for Selecting Slices for Shareable File Systems
Customizing a New Boot Environment's Content
Synchronizing Files Between Boot Environments
Adding Files to the /etc/lu/synclist
Forcing a Synchronization Between Boot Environments
Booting Multiple Boot Environments
Solaris Live Upgrade Character User Interface
4. Using Solaris Live Upgrade to Create a Boot Environment (Tasks)
5. Upgrading With Solaris Live Upgrade (Tasks)
6. Failure Recovery: Falling Back to the Original Boot Environment (Tasks)
7. Maintaining Solaris Live Upgrade Boot Environments (Tasks)
8. Upgrading the Solaris OS on a System With Non-Global Zones Installed
9. Solaris Live Upgrade (Examples)
10. Solaris Live Upgrade (Command Reference)
Part II Upgrading and Migrating With Solaris Live Upgrade to a ZFS Root Pool
11. Solaris Live Upgrade and ZFS (Overview)
12. Solaris Live Upgrade for ZFS (Planning)
13. Creating a Boot Environment for ZFS Root Pools
14. Solaris Live Upgrade For ZFS With Non-Global Zones Installed
B. Additional SVR4 Packaging Requirements (Reference)
The lucreate -m option specifies which file systems to create in the new boot environment, and how many. You must repeat this option once for each file system you want to create. When using the -m option to create file systems, follow these guidelines:
You must specify one -m option for the root (/) file system for the new boot environment. If you run lucreate without the -m option, the Configuration menu is displayed. The Configuration menu enables you to customize the new boot environment by redirecting files onto new mount points.
Any critical file systems that exist in the current boot environment and that are not specified in a -m option are merged into the next highest-level file system created.
Only the file systems that are specified by the -m option are created on the new boot environment. To create the same number of file systems that exist on your current system, you must specify one -m option for each file system to be created.
For example, a single use of the -m option specifies where to put all the file systems: all the file systems from the original boot environment are merged into the one file system that is specified by that -m option. If you specify the -m option twice, you create two file systems. If you have file systems for root (/), /opt, and /var, you would use one -m option for each file system on the new boot environment.
Do not duplicate a mount point. For example, you cannot have two root (/) file systems.
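As a sketch of the guidelines above, the following command creates a boot environment named newBE with separate root (/) and /var file systems, using one -m option per file system. The device paths and boot environment name are hypothetical; substitute slices appropriate to your system.

```
# Hypothetical slices -- substitute your own device names.
# One -m option per file system: root (/) and /var each get a slice.
lucreate -n newBE \
    -m /:/dev/dsk/c0t1d0s0:ufs \
    -m /var:/dev/dsk/c0t1d0s3:ufs
```

Because /opt is not named in a -m option here, it would be merged into the next highest-level file system that is created, in this case root (/).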