- Solaris Volume Manager Performance Overview
- Solaris Volume Manager Striping Considerations
- Software RAID Considerations
- Multipathing
- Solaris Volume Manager Performance With UFS File Systems and Oracle
- Administration Tips
- Summary
- About the Author
- References
- Accessing Sun Documentation
Solaris Volume Manager Performance With UFS File Systems and Oracle
The debate over file systems versus raw volumes has largely subsided. The vast majority of data centers use file systems, and with good reason. Sun understands this and has improved the performance of UFS file systems to nearly match that of raw devices. With the introduction of the Concurrent Direct I/O feature in the Solaris 8 3/01 OE, the last of the UFS performance bottlenecks has disappeared.
No special performance tuning is required to make Solaris Volume Manager software perform well with UFS file systems. To get good UFS performance, you need only address a few simple points:
Write-On-Write is not a problem for Oracle database software. The metainit man page describes a problem that can cause the two sides of a mirror to contain different data: if the contents of a buffer are changed while the data is in flight to disk, each side of the mirror can end up with different data. The metainit man page suggests the following /etc/system file setting:
set md_mirror:md_mirror_wow_flg=0x20
This setting forces stable copies for raw and direct I/O writes, but it significantly degrades write performance. Oracle software does not exhibit the write-on-write problem because it never modifies a buffer while a write is in flight, so you can safely leave this parameter unset.
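If you want to confirm the current value of this flag on a running system, one way is to read it with mdb. This is a quick sketch; it assumes the md_mirror module is loaded, which is the case once at least one mirror is configured:

echo "md_mirror_wow_flg/X" | mdb -k

The command prints the flag's current value in hexadecimal.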
Use an 8-Kbyte database block size in conjunction with an 8-Kbyte fragment size for the file system. This combination ensures that database blocks align with the underlying storage. If blocks are not properly aligned or are too small, write performance can suffer. Consider the example of a database using 2-Kbyte blocks on a file system with an 8-Kbyte block size: four database blocks fit in one FS block. When the database writes a 2-Kbyte block, the file system must write the full 8 Kbytes at once, so the remaining 75 percent of the FS block must first be read from disk before the combined block can be written back. If you must use a 2-Kbyte database block size on file systems, you can prevent this read-modify-write phenomenon by using the forcedirectio mount option to bypass the file system cache on I/O operations, as shown in the example below.
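As an illustration, the first command below creates a UFS file system with an 8-Kbyte block size and an 8-Kbyte fragment size on a Solaris Volume Manager volume; the second shows the forcedirectio mount for the 2-Kbyte-block case. The metadevice d10 and the mount point /u01 are placeholder names:

newfs -b 8192 -f 8192 /dev/md/rdsk/d10
mount -F ufs -o forcedirectio /dev/md/dsk/d10 /u01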
CAUTION
If you are currently using UFS file systems and are not mounting them with the forcedirectio option, analyze your application carefully before enabling direct I/O. It is quite possible that some of the objects in your database are benefiting from the UFS buffer cache. Turning on direct I/O bypasses the FS cache, which can increase the amount of physical I/O and reduce the transaction rate as a result.
If you are using direct I/O, make sure that you are running at least the Solaris 8 3/01 OE. That release provides Concurrent Direct I/O, which eliminates the single-writer lock; removing this lock can dramatically improve performance.
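You can verify which OE release a system is running by examining /etc/release:

cat /etc/release

The release banner should report Solaris 8 3/01 or later.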
Mount file systems with the logging option. UFS logging avoids the need to run fsck after a crash, which dramatically reduces recovery time.
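For example, an /etc/vfstab entry that mounts a volume with logging enabled might look like the following; d10 and /u01 are placeholder names, and the logging option can be combined with others, such as forcedirectio, in a comma-separated list:

/dev/md/dsk/d10  /dev/md/rdsk/d10  /u01  ufs  2  yes  logging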
If you are not mounting with the forcedirectio option, be aware of the segmap_percent default in the Solaris 8 OE. This variable was introduced to limit the amount of physical memory that the kernel maps through the segmap segment, which UFS uses to cache file data. By default, the value is 12 percent of physical memory. A large-memory system with heavy UFS usage can benefit from a higher value.
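For example, to allow segmap to use 25 percent of physical memory, you could add the following line to /etc/system and reboot. The value 25 is purely illustrative; choose a value based on testing with your own workload:

set segmap_percent=25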