Mirroring Shared Storage Arrays
The CDP 280/3 includes two Sun StorEdge™ T3 arrays, one mirrored to the other. A single array contains nine 36-Gbyte disk drives, eight of which are arranged in a controller-based RAID-5 configuration (seven data disks plus one parity) with the remaining disk serving as a hot spare. The total amount of usable, shared disk storage is approximately 225 Gbytes (seven disks multiplied by 36 Gbytes each, minus overhead).
This configuration balances availability against usable storage capacity for the Sun StorEdge™ T3 arrays. You can change it if desired, as long as the resulting configuration follows accepted best practices. Each array is configured at the factory with a 32-Kbyte data stripe width.
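For reference, the factory layout can be confirmed from each array's command-line interface. The commands below follow the Sun StorEdge T3 administrator's guide; the prompt and the drive numbering shown are illustrative only:

    t3:/:<1> sys list
    t3:/:<2> vol list
    t3:/:<3> vol stat

The sys list command reports the controller parameters, including the 32-Kbyte blocksize (stripe unit); vol list shows the RAID-5 volume, its data drives, and the standby (hot spare) drive; and vol stat reports the status of each drive in the volume.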
When VxVM software is used to manage the shared storage (in all cases except the HA ORACLE data service with Solaris Volume Manager software), the system uses dirty region logging (DRL) to minimize the time required to remirror an array should one fail. A VERITAS volume constructed with DRL contains an additional log "subdisk" (to borrow from VxVM software terminology), which stores a recovery map and two active maps (one for each node). The region of the volume being modified is marked dirty in the log, and that log update is flushed from the RAID cache, before the actual writes of the database data take place. Once the data are written to both the primary Sun StorEdge™ T3 array and its mirror, the region is again marked clean in the log's maps. After a system failure, only the regions still marked dirty need to be resynchronized between the primary array and its mirror to bring the mirror up to date.
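For illustration only, a mirrored volume with a DRL log can be created with the vxassist command along the following lines; the disk group name (oradg), volume name (oravol), and size are placeholders, not the CDP 280/3 factory values:

    # vxassist -g oradg make oravol 200g layout=mirror,log nmirror=2
    # vxprint -g oradg -ht oravol

The layout=mirror,log attribute requests the DRL log in addition to the two data plexes, and the vxprint output shows the resulting log subdisk alongside the mirrored plexes. An existing mirrored volume can be given a log afterward with the vxassist addlog operation.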
Alternatively, as described previously, mirroring may be implemented with Solaris Volume Manager software when that database configuration is chosen for HA ORACLE.
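As a rough sketch only, the equivalent mirror under Solaris Volume Manager software looks like the following. The metadevice numbers, DID device paths, and the diskset name (oraset) are assumptions; in a clustered configuration the metadevices would normally be created in a shared diskset addressed with the -s option:

    # metainit -s oraset d11 1 1 /dev/did/rdsk/d4s0
    # metainit -s oraset d12 1 1 /dev/did/rdsk/d7s0
    # metainit -s oraset d10 -m d11
    # metattach -s oraset d10 d12

The first two commands build one submirror on each Sun StorEdge T3 array, the third creates a one-way mirror from the first submirror, and the last attaches the second submirror, which triggers the initial resynchronization.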
As with the rest of the CDP 280/3, the Sun StorEdge™ T3 arrays are reconfigurable, in this case by using Sun StorEdge Component Manager software to access the RAID controller of each array.
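The same kind of change can also be made from the array's own command-line interface. The sequence below is a sketch based on the Sun StorEdge T3 administrator's guide (the volume name v0, the drive numbering, and the 64-Kbyte example value are assumptions); because it removes and re-creates the volume, it destroys the data on it and applies only before the system is placed into service or after the data have been backed up:

    t3:/:<1> vol unmount v0
    t3:/:<2> vol remove v0
    t3:/:<3> sys blocksize 64k
    t3:/:<4> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3:/:<5> vol init v0 data
    t3:/:<6> vol mount v0

The blocksize can be changed only while no volumes are defined on the array, and the vol init step can take a considerable amount of time to complete.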