OCA/OCP Oracle Database 11g All-in-One Exam Guide 746

12. ✓ A and B. Both these approaches are reasonable.
✗ C and D. C is impossible. D would work, but at the price of losing your current table.
13. ✓ E. Flashback drop is not applicable to any of these scenarios.
✗ A, B, C, and D. A will not put anything in the recycle bin, and B will remove the table from the recycle bin. C bypasses the recycle bin. D is wrong because even though indexes do go into the recycle bin when you drop their table, they do not go in there if they are dropped individually.
14. ✓ C. Flashback drop will recover the most recent table.
✗ A, B, and D. If tables have the same original name, they are recovered on a last-in-first-out basis.
15. ✓ C. Table flashbacks are implemented as one transaction, which (like any other transaction) will be rolled back if it hits a problem.
✗ A, B, and D. A and B are wrong because a table flashback must succeed in total, or not at all. D would be a way around the difficulty, but it is certainly not a requirement for flashback, and probably not a good idea.
16. ✓ C. This is the only way to guarantee success.
✗ A, B, and D. A and B could both fail, depending on the nature of the changes. D confuses Flashback Table, where foreign key constraints are maintained, with Flashback Drop, where they are dropped.
17. ✓ D. It is only Flashback Table that requires row movement.
✗ A, B, and C. A is wrong because the object number is preserved through the DROP and FLASHBACK . . . TO BEFORE DROP operations. B is wrong because ROWIDs always refer to the table the row is a part of. C is wrong because flashback of one transaction can proceed without row movement being required.
18. ✓ A. The FBDA manages the contents of Flashback Data Archives.
✗ B, C, and D. RVWR writes to the flashback logs; CTWR records changed block addresses for RMAN fast incremental backups; the ARCn process(es) write archive log files.
19. ✓ A, B, and D.
A and B would remove necessary information from the data dictionary, and D would not generate the undo needed for archiving.
✗ C and E. Columns can be added to archive-enabled tables.

CHAPTER 20
Automatic Storage Management

Exam Objectives
In this chapter you will learn how to
• 053.1.1 Describe Automatic Storage Management (ASM)
• 053.1.2 Set Up Initialization Parameter Files for ASM and Database Instances
• 053.1.3 Start Up and Shut Down ASM Instances
• 053.1.4 Administer ASM Disk Groups

Automatic Storage Management, or ASM, is a facility provided with the Oracle database for managing your disks. It is an Oracle-aware logical volume manager, or LVM, that can stripe and mirror database files and recovery files across a number of physical devices. This is an area where the database administration domain overlaps with the system administration domain. Many databases will not use ASM: they will store their files on the volumes provided by the operating system, which may well be managed by an LVM. But if you do not have a proper LVM, as will probably be the case with low-end systems running on, for example, Linux or Windows, then ASM provides an excellent (and bundled) alternative to purchasing and installing one. On high-end systems, ASM can work with whatever LVM is provided by the operating system, or can obviate the requirement for one. In all cases, ASM has been proven to deliver spectacular performance and should always be considered as a viable option for database storage.

Before going into the details of ASM architecture and configuration, the following is a brief discussion of logical and physical volume management. This is not intended to be any sort of comprehensive treatment (which is not necessary for the OCP exam) but rather the minimum information needed to appreciate the purpose of ASM.
The Purpose of a Logical Volume Manager

Your database server machine will have one or more disks, either internal to the computer or in external disk arrays. These disks are the physical volumes. In virtually all modern computer installations, there is a layer of abstraction between these physical volumes and the logical volumes. Logical volumes are virtual disks, or file systems, that are visible to application software, such as the Oracle database. Physical volumes are presented to the software as logical volumes by the operating system’s LVM.

Even the simplest computer nowadays will probably be using some sort of LVM, though it may be extremely limited in its capabilities. In the case of a Windows PC, you may well have only one disk, partitioned into two logical drives: perhaps a C: drive formatted with the FAT32 file system, and a D: drive formatted with the NTFS file system. Thus one physical volume is presented to you by Windows as two logical volumes. Larger installations may have dozens or hundreds of disks. By using an LVM to put these physical volumes into arrays that can be treated as one huge disk area and then partitioned into as many (or as few) logical volumes as you want, your system administrators can provide logical volumes of whatever size, performance, and fault tolerance is appropriate.

RAID Levels

If the physical volumes are mapped one-to-one onto logical volumes, the performance and fault tolerance of the logical volumes is exactly that of the physical volumes. RAID (or redundant array of inexpensive disks—not a term that has any meaning nowadays) in its various levels is intended to enhance performance and fault tolerance by exploiting the presence of multiple physical volumes. There are four levels to consider in most environments.

RAID level 0 is optimal for performance but suboptimal for fault tolerance.
A RAID 0 array consists of one or more logical volumes cut (or striped) across two or more physical volumes. Theoretically, this will improve logical disk I/O rates and decrease fault tolerance by a proportion equal to the number of physical volumes. For example, if the array consists of four disks, then it will be possible to read from and write to all of them concurrently; a given amount of data can be transferred to the logical volume in a quarter of the time it would take if the logical volume were on only one disk. But if any of the four disks is damaged, the logical volume will be affected, so it is four times more likely to fail than if it were on only one disk.

RAID level 1 is optimal for fault tolerance. There may be performance gains, but that is not why you use it. Where RAID 1 is definitely suboptimal is cost. A RAID 1 array consists of one or more logical volumes mirrored across two or more disks: whenever data is written to the logical volume, copies of it will be written concurrently to two or more physical volumes. If any one physical volume is lost, the logical volume will survive because all data on the lost physical volume is available on another physical volume. There may be a performance improvement for read operations if it is possible to read different data from the mirrors concurrently; this will depend on the capabilities of the LVM. The cost problem is simply that you will require double the disk capacity—more than double, if you want a higher degree of mirroring. In the four-disk example, the logical volume will be equivalent in size to only two of the physical disks, but you can lose any one disk, and possibly two disks, before the logical volume is damaged.

RAID level 5 is a compromise between the performance of RAID 0 and the fault tolerance of RAID 1.
The logical volume is cut across multiple physical volumes (so concurrent reads and writes are possible), and a checksumming algorithm writes out enough information to allow reconstruction of the data on any one physical volume, if it gets damaged. Thus you do not get all the performance gain of RAID 0, because of the checksumming overhead; in particular, write operations can be slow because each write operation needs to calculate the checksum before the write can take place. You do not get all the fault tolerance of RAID 1, because you can survive the loss of only one disk. In the four-disk example, the logical volume will be equivalent in size to three physical volumes, and if any one of the four is lost, the logical volume will survive.

RAID 0+1 is optimal for both fault tolerance and performance: you mirror your striped disks. In the four-disk example, your system administrators would create one logical volume striped across two of the physical disks and mirror this to the other two disks. This should result in double the performance and double the safety (and double the price) of mapping one logical volume directly onto one physical volume.

Volume Sizes

Physical volumes have size restrictions. A disk is a certain size, and this cannot be changed. Logical volumes may have no size restrictions at all. If your LVM allows you to put a hundred disks of 100GB each into a RAID 0 array, then your logical volume will be 10TB big. (It will also perform superbly, but it will not be very tolerant against disk failure.) Furthermore, logical volumes can usually be resized at will, while the system is running.

Choice of RAID Level

Many system administrators will put all their physical volumes into RAID 5 arrays. This is simple to do and provides a certain amount of fault tolerance and perhaps a small performance improvement, but this may not be the best practice for an Oracle database.
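The capacity arithmetic behind the RAID levels just described is easy to verify. The following sketch (not from the book; the function name is ours) computes the usable space of the running four-disk, 72GB example under each level, ignoring formatting overhead:

```python
def usable_capacity_gb(disks: int, disk_gb: int, level: str) -> int:
    """Approximate usable capacity of a RAID array, ignoring overheads."""
    if level == "0":        # pure striping: all raw space is usable
        return disks * disk_gb
    if level == "1":        # single mirror: half the raw space
        return disks * disk_gb // 2
    if level == "5":        # one disk's worth of space holds the checksums
        return (disks - 1) * disk_gb
    if level == "0+1":      # stripe across half the disks, mirror to the rest
        return disks * disk_gb // 2
    raise ValueError(f"unknown RAID level: {level}")

# The four-disk, 72GB example used throughout this section:
for level in ("0", "1", "5", "0+1"):
    print(f"RAID {level}: {usable_capacity_gb(4, 72, level)}GB usable")
```

As the figures show, RAID 0 gives the full 288GB, RAID 1 and RAID 0+1 give 144GB, and RAID 5 gives 216GB, matching the "two physical disks" and "three physical volumes" equivalences in the text.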
As the DBA, you should take control of the RAID strategy and apply different levels to different file types. Some files are critical to the database remaining open: the files that make up the SYSTEM tablespace, the active UNDO tablespace, and the controlfile copies. Damage to any of these will cause the database to crash. Some files are critical for performance: the online redo log files and the controlfiles. I/O on these can be a serious bottleneck. By considering the characteristics of each file, a RAID strategy should become apparent. For example, the SYSTEM and UNDO tablespaces should be on RAID 1 volumes, so that they will always be available. The online redo logs are protected by multiplexing, so you don’t have to worry about hardware fault tolerance, but performance is vital, so put them on RAID 0 volumes. The controlfile copies are critical for both availability and performance: if any copy is damaged, the database will crash, and I/O on them can be a bottleneck; thus, RAID 0+1 could be appropriate. Your other datafiles could perhaps go on RAID 5 volumes, unless they are particularly important or volatile.

ASM Compared with Third-Party LVMs

ASM has a huge advantage over other logical volume managers: it is aware of the nature of Oracle database files, and it performs striping and mirroring at the file level, not the volume level. This means it can make more intelligent decisions about how to manage the files than a third-party product. First, when a logical volume is cut across several physical volumes in what are called “stripes,” a decision must be made on the size of the stripe. Different file types will perform better with different stripe sizes: ASM is aware of this and will stripe them appropriately. Second, ASM can handle files individually, whereas all other LVMs work at the volume level: they are not aware of the files within the volume. So with a third-party LVM, you have to specify RAID attributes per volume.
ASM can specify the attributes per file, so you can, for instance, have three-way mirroring for your SYSTEM tablespace datafiles but no mirroring at all for your temporary tablespaces’ tempfiles, all within the same logical volume. Third, ASM is in principle the same on all platforms, and it is bundled with the database. You do not have to learn (and perhaps purchase) different volume managers for different platforms. Any configuration you do is portable, within the limits of device naming conventions. Fourth, there is the question of availability. Some operating systems come with an LVM as standard. With AIX on IBM hardware, for example, use of the LVM is not an option—it is compulsory. With other vendors’ operating systems, the LVM may be a separately licensed option, or there may not be one at all—you will have to buy a third-party product. ASM is always available and should bring significant performance and manageability benefits to systems that do not have an LVM; on those that do, it will add an extra, Oracle-aware, layer to space management that will further enhance performance while reducing the management workload.

The ASM Architecture

Implementing ASM requires a change in the instance architecture. It even requires another instance. There is an instance parameter: INSTANCE_TYPE, which defaults to RDBMS. An RDBMS instance is a normal instance, used for opening a database and accepting user sessions. Setting this parameter to ASM will start an Automatic Storage Management instance, which is very different. An ASM instance is not used by end users; it controls access to ASM files stored on ASM disk groups, on behalf of the RDBMS instances. These files are functionally the same as non-ASM database files: they are datafiles, controlfiles, log files, and recovery files, but they are stored in the ASM logical volume manager environment, not in the file systems provided by your operating system.
The Oracle cluster services are needed on the host in order to set up communication between the RDBMS instances and the ASM instance. The cluster services are the same services used to enable a RAC (or Real Application Clusters) environment, where several instances on several hosts open a shared database.

The Cluster Synchronization Service

Oracle Corporation provides a suite of clusterware. This is intended for setting up a RAC: a group of instances on several machines opening a database on shared disks. Oracle clusterware can be used as a replacement for clusterware that you would otherwise buy from your hardware and operating system vendor. However, the cluster services are also used in a single-instance environment to set up the communications between ASM and RDBMS instances.

EXAM TIP ASM is not required for RAC (because you can use a third-party clustered volume manager for the shared database files), nor is it only for RAC (because it works very well for single-instance, nonclustered databases too).

In a RAC environment, the cluster services require a separate installation and will run off their own Oracle home. In a single-instance environment, the small part of the cluster services that is needed to enable ASM is installed into and runs from the database Oracle home; this is the CSSD, or Cluster Services Synchronization Daemon. Under Windows, the CSSD runs as a service; on Unix, it is a daemon launched by an entry in the /etc/inittab file.

The ASM Disks and Disk Groups

An ASM disk group is a pool of ASM disks managed as one logical unit. As with any other LVM, ASM takes a number of physical volumes and presents them to Oracle as one or more logical volumes. The physical volumes can be actual disks or partitions of disks, or they can be volumes managed by a volume manager that is part of your operating system. Either way, they will not be formatted with any file system; they must be raw devices.
ASM will take the raw devices and put them into a number of ASM disk groups. A disk group is the logical volume.

EXAM TIP ASM disks must be raw devices, without a file system, but they do not need to be actual disks. They can be disks, partitions of a disk, or logical volumes managed by an LVM.

TIP It is possible to set up ASM using files instead of raw disks, but this is totally unsupported and suitable only for training or demonstration systems.

For example, on a Linux system you might have six SCSI disks of 72GB each. You could decide to use one of them, /dev/sda, for the root file system and utilities. Then use /dev/sdb for the $ORACLE_HOME directories, and then /dev/sdc, /dev/sdd, /dev/sde, and /dev/sdf for the database files. You would create a file system on each disk and format it—probably as ext3—and then mount the file systems onto directory mount points in the root file system. This is all very well, but you are wasting the performance potential of the machine. It will be extremely difficult to balance the I/O evenly across the four disks used for the database, and you will have to monitor which files have the most activity and try to keep them separate. Also, one disk may fill up while the others have plenty of space. The equivalent Windows example would be drive C: for Windows itself, and drive D: for the ORACLE_HOME. Then drives E:, F:, G:, and H: would be dedicated to database files. Probably all the disks would be formatted as NTFS. If you were to put the four disks dedicated to the database into one RAID 0 logical volume, you would get better performance, and a system that requires much less monitoring and management. But it may be that you do not have a logical volume manager. Enter ASM . . . .

To use ASM, you would not format the four disks to be used for database files. The root file system and the ORACLE_HOME file system must be managed as normal; you cannot use ASM volumes for anything other than database and recovery files.
Then you would launch an ASM instance and set instance parameters such that it will find the four raw volumes and place them into one ASM disk group. This group will contain the database, with whatever RAID characteristics you want.

EXAM TIP You can use ASM only for database and recovery files, not for your Oracle Home or for anything else. The definition of “database file” is quite broad, but does not include trace files, the alert log, the password file, or a static parameter file.

The size of the ASM disk group is the sum of the size of the ASM disks less a small amount, but depending on what degree of fault tolerance is specified, the size available for use will be less. The default fault tolerance is single mirror, meaning that, to continue our example, the end result will be close to 144GB of space available for the database, with single mirrors for fault tolerance and four times the native disk performance. This degree of mirroring can be changed for the whole group, or for individual files within it; the striping is automatic and cannot be disabled.

EXAM TIP ASM mirroring defaults to single mirror, but can be set to none or double; striping cannot be disabled.

Disks can be added and removed from a disk group dynamically, within certain limits. In general, if the operating system and hardware can handle adding or removing disks while the computer is running, then ASM can handle this as well. Figure 20-1 illustrates the ASM storage structure, represented as an entity-relationship diagram. One ASM instance can support multiple RDBMS instances, and it is possible (though in practice never done) to run more than one ASM instance to which the RDBMS instance could connect: this potential many-to-many relationship between the instance types is resolved by the Cluster Services.
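To make this concrete, here is an illustrative pair of statements for the four-disk Linux example; the disk group name and device paths are assumptions, not taken from the book. Both would be issued while connected to the ASM instance as SYSASM:

```sql
-- NORMAL REDUNDANCY is the default: single (two-way) mirroring.
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/sdc', '/dev/sdd', '/dev/sde', '/dev/sdf';

-- Disks can be added while the group is in use; this triggers an
-- automatic rebalance of existing data onto the new disk.
ALTER DISKGROUP data ADD DISK '/dev/sdg';
```

EXTERNAL REDUNDANCY (no mirroring) or HIGH REDUNDANCY (double mirroring) could be specified instead, corresponding to the none/double options mentioned in the exam tip.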
Another many-to-many relationship is between files and disks: one disk may store parts of many files, and one file can be distributed across many disks; this is resolved by the allocation units.

The ASM Instance

When using non-ASM files, an RDBMS instance will locate and open its files itself. In the ASM environment, these tasks are carried out by an ASM instance on behalf of the RDBMS instance. But even in an ASM environment, the RDBMS instance will always do its own I/O: its server processes will read the datafiles, its DBWn process will write to the datafiles, and the LGWR will write to the online redo log files.

EXAM TIP Normal disk activity does not go through the ASM instance. ASM is a management and control facility that makes the files available; it does not do the actual I/O work.

Figure 20-1 The components of ASM

In some respects, an ASM instance is an instance like any other. It has an SGA and some of the usual background processes. But it cannot mount or open a database; all it can do is locate and manage ASM disks. Many instance parameters are not legal for an ASM instance. Usually, the parameter file (which may be a dynamic spfile or a static pfile) will have only half a dozen parameters. Because it cannot mount or open a database, it will never be able to read a data dictionary; for this reason, you can connect to it only with password file or operating system authentication, as SYSOPER or as SYSDBA, or as SYSASM. SYSASM is a role introduced with release 11g, to permit a separation of duties between the database administrator and the ASM administrator. It is possible to have more than one ASM instance running on one computer, but there is no value in doing this. You should create one ASM instance per computer and use it to manage all the ASM disks available to that computer on behalf of all the RDBMS instances running on the computer.
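As an illustration of how small such a parameter file can be, the following is a sketch of a pfile for an ASM instance; the disk string and group name are assumptions continuing the Linux example, not values from the book:

```
INSTANCE_TYPE=ASM               # makes this an ASM instance, not RDBMS
ASM_DISKSTRING='/dev/sd[c-f]'   # where to discover candidate ASM disks
ASM_DISKGROUPS=DATA             # disk groups to mount automatically at startup
ASM_POWER_LIMIT=1               # default intensity of rebalancing operations
```

INSTANCE_TYPE is the only mandatory line; the others have workable defaults.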
An ASM instance will have two background processes in addition to the usual processes. These are the RBAL and the ARBn processes, used to handle rebalancing activity—the movement of data between ASM disks in an ASM disk group, in response to adding or removing a disk to or from the group. If a new device is added to a group, ASM will detect this and initiate an operation to bring the disk into use. This will mean moving data onto the disk, to take account of the increased possibilities for striping and to include the new disk in spreading the I/O workload evenly. The RBAL process coordinates this rebalancing, and the ARBn processes (several of these may be launched automatically) do the work. Also, if a disk leaves an ASM disk group, either by design or because of a hardware failure, a rebalancing operation is necessary to reestablish mirrors. In either case, the redistribution of data will occur without users being aware of the problem or the activity.

EXAM TIP A rebalancing operation will start automatically in response to disk group reconfiguration.

To create and configure disk groups, you must first connect to the ASM instance and start it. Once you have created and mounted the groups, the ASM instance will then wait for requests from RDBMS instances for access to files.

The RDBMS Instance

An RDBMS instance that is using ASM files functions as normal, except that it will have two additional background processes: RBAL and ASMB. The RBAL process opens the ASM disks, which it will locate through the ASM instance. The ASMB process connects to the ASM instance by creating a session against it, via a server process. This session is responsible for the continuing communication between the RDBMS instance and the ASM instance; in effect, the RDBMS instance becomes a client to the ASM server. The information passed over this session will be requests for physical changes, such as file creation, deletion, or resizing, and also various statistics and status messages.
It is not necessary to inform the RDBMS instance of the name of the ASM instance. When an ASM instance starts, it registers its name and the names of the ASM disk groups it is managing with the Cluster Synchronization Service. This is why the Oracle cluster services must be running, even if the node and instance are not clustered. The RDBMS instance does know the names of the disk groups that it is using; these names are embedded in the ASM filenames stored (like any other filenames) in the RDBMS instance’s controlfile, and this lets the ASMB process locate the ASM instance managing those groups by interrogating the Cluster Synchronization Service. It can then make an IPC connection to the ASM instance. Commonly, an RDBMS instance will require access to only two disk groups: one for its live database files, the other for its flash recovery area.

The ASM Files

Files in ASM disk groups are managed by the ASM instance on behalf of the RDBMS instances. They are created, read, and written by the RDBMS instances. The file types that will commonly be stored as ASM files include any or all of these:

• Controlfile
• Dynamic initialization parameter file, the spfile
• Online redo log files
• Archive redo log files
• Datafiles
• Tempfiles
• RMAN backup sets
• RMAN image copies
• Flashback logs
• Controlfile autobackups
• Data Pump dump files

As this list shows, the whole database can be stored on ASM disks, as can all recovery-related files. You can direct the flash recovery area to an ASM disk group.

EXAM TIP ASM does not manage the Oracle binaries, nor the alert log, trace files, and password file.

All ASM files are striped across all the ASM disks in the group. The allocation of space is by allocation unit, or AU.
The standard AU size is 1MB, and for files where data access tends to take the form of reasonably large disk I/O operations, such as datafiles or archive log files, the striping is also in 1MB units. This is known as “coarse” striping. For files where read and write requests are generally for smaller units of I/O, such as online logs and the controlfile, the AUs themselves are striped across the disks in 128KB chunks. This is known as “fine” striping.
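The AU size and redundancy of each mounted group can be confirmed from the ASM instance with a query such as the following (the view and columns are standard; the results will of course depend on your groups):

```sql
-- One row per disk group discovered by the ASM instance;
-- ALLOCATION_UNIT_SIZE is in bytes (1048576 for the standard 1MB AU),
-- and TYPE shows the redundancy level (EXTERN, NORMAL, or HIGH).
SELECT name, allocation_unit_size, type, total_mb, free_mb
FROM   v$asm_diskgroup;
```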