Filesystems
Application data access through a filesystem is much more common than access through raw storage. Filesystem meta structures are maintained by the filesystem's driver, removing that overhead from the application. For this reason, very few applications are written to perform raw I/O, aside from a few database systems whose developers believe they can manage data integrity or performance better than a general-purpose filesystem. This section addresses the performance characteristics of several filesystems and compares them to the raw device access covered in the previous section.
The configuration for this section is identical to the previous section. We use the IA64 host atlorca2. Filesystem types used for comparisons are xfs and ext3. To begin, we define a filesystem using the same disks from previous examples. By creating a filesystem, we simply add an additional layer to the data management overhead.
Journaling to a Separate Disk
The following fdisk output shows device sdj with one primary partition of type 83, known as Linux native. The objective is to compare the performance of a large sequential read on the raw device to the same read through an XFS filesystem, illustrating the filesystem's overhead.
The first step in comparing raw and filesystem performance is to set a baseline: a single-threaded sequential read of the raw device /dev/sdj.
atlorca2:~ # fdisk -l /dev/sdj

Disk /dev/sdj: 250.2 GB, 250219069440 bytes
255 heads, 63 sectors/track, 30420 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1               1       30420   244348618+  83  Linux
The fdisk output shows that partition 1 is active, with 250GB of capacity.
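The baseline itself is simply a single-threaded sequential read against the device while iostat records the request profile. A minimal sketch follows (the 512KB block size matches the calculations later in this section; the previous section may have bound the disk through the raw interface, so the exact device node is an assumption):

atlorca2:~ # dd if=/dev/sdj of=/dev/null bs=512k      # single-threaded sequential read of the whole device
atlorca2:~ # iostat -x 1 100 | grep sdj               # run in a second session; watch r/s, rsec/s, and avgrq-sz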
The XFS filesystem type offers a performance feature: the journal log can be created on a disk separate from the filesystem data. Next we demonstrate and test this feature. In this example, device sdk holds the journal log, and sdj holds the filesystem (meta) data.
atlorca2:~ # mkfs.xfs -f -l logdev=/dev/sdk1,size=10000b /dev/sdj1
meta-data=/dev/sdj1              isize=256    agcount=16, agsize=3817947 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=61087152, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =/dev/sdk1              bsize=4096   blocks=10000, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
The following demonstrates a large block write and lets us see what performance boost, if any, the external log device offers.
atlorca2:/xfs.test # dd if=/dev/zero of=/xfs.test/zero.out bs=512k

atlorca2:~ # iostat -x 1 100 | egrep "sdj|sdk"
Device: rrqm/s   wrqm/s  r/s    w/s    rsec/s    wsec/s  rkB/s     wkB/s   avgrq-sz avgqu-sz  await  svctm  %util
sdj     0.00   32512.00  0.00  281.00   0.00  262144.00   0.00  131072.00   932.90   141.99  394.24   3.56 100.00
sdk     0.00       0.00  0.00    0.00   0.00       0.00   0.00       0.00     0.00     0.00    0.00   0.00   0.00
sdj     0.00   36576.00  0.00  277.00   0.00  294929.00   0.00  147464.50  1064.73   142.75  511.21   3.61 100.00
sdk     0.00       0.00  0.00    1.00   0.00       5.00   0.00       2.50     5.00     0.10  105.00 105.00  10.50
sdj     0.00   35560.00  0.00  279.00   0.00  286736.00   0.00  143368.00  1027.73   141.26  380.54   3.58 100.00
sdk     0.00       0.00  0.00    0.00   0.00       0.00   0.00       0.00     0.00     0.00    0.00   0.00   0.00
sdj     0.00   36576.00  0.00  277.00   0.00  294912.00   0.00  147456.00  1064.66   142.35  204.65   3.61 100.00
sdk     0.00       0.00  0.00    0.00   0.00       0.00   0.00       0.00     0.00     0.00    0.00   0.00   0.00
The journal log device adds little to no performance here. iostat shows almost no I/O to disk sdk, which holds the log, so placing the log on a separate disk does not help: we are not diverting a meaningful amount of I/O. The reason is that the intent to modify is recorded only once at the beginning of the file access; from that point forward, the I/O completes on the meta device, as depicted in the previous iostat output.
Determining I/O Size for Filesystem Requests
As shown previously, the dd command's bs option sets the block size to 512k for each I/O transaction, and we can confirm this in iostat. To calculate the value, find the average request size (avgrq-sz) column, which iostat reports in sectors. In this example, avgrq-sz is 1024 sectors. To convert to bytes, multiply avgrq-sz by the sector size (512 bytes):
1024 sectors x 512 bytes/sector = 524,288 bytes; 524,288 bytes / 1024 bytes/KB = 512KB
However, the same dd command on the XFS filesystem reveals that the largest avgrq-sz value produced by a sequential read is 256 sectors, regardless of the block size given to dd. Following the same calculation, an XFS sequential read is limited to 128KB per request. The block size of any I/O is important to understand because not all programs control it: a program can request a given block size, but a lower-layer driver can require that the request be broken into smaller ones, as illustrated in the previous iostat output.
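If you prefer not to do this arithmetic by hand, the conversion is easy to script. The following one-liner is a minimal sketch (it assumes the iostat -x output format shown above, with avgrq-sz as the tenth column; column positions can differ between sysstat versions):

atlorca2:~ # iostat -x 1 5 | awk '/^sdj/ { printf "%s: avg request = %.1f KB\n", $1, ($10 * 512) / 1024 }'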
Thus we have demonstrated that using a remote journal provides no performance improvements for large data access on sequential reads and writes. However, when writing to our XFS filesystem with a large number of small files, the opposite becomes true: The remote journal does in fact help.
Loading a Filesystem with Small Block I/O Transfers
In the next test, we write 512KB files as fast as the system allows while watching the load on the journal device and the filesystem meta device. To run this test, we must write a short, simple program to control the number of files to create during our test phase. The program is as follows:
#!/bin/sh
count=1
total=0
while [ $total -ne $* ]
do
    total=`expr $total + $count`
    touch $total
    dd if=/xfs.test/zero.out of=/xfs.test/$total bs=512k > /dev/null 2>&1  # Using dd to control the block size.
done
This program uses the same dd command used throughout our testing. It is important to always keep the control in any test constant. By running the previous program, thousands of 512KB files are created, causing an increased load on the journal log device (sdk), as depicted in the following listing:
atlorca2:/xfs.test # ./count_greg.sh 10000

atlorca2:~ # iostat -x 1 100 | egrep "sdj|sdk"
Device: rrqm/s   wrqm/s  r/s    w/s    rsec/s    wsec/s  rkB/s    wkB/s   avgrq-sz avgqu-sz  await  svctm  %util
sdj     0.00   16219.00  0.00  181.00   0.00  131169.00   0.00  65584.50   724.69     9.31   54.90   2.77  50.20
sdk     0.00       0.00  0.00   53.00   0.00    3200.00   0.00   1600.00    60.38     1.43   28.96   9.40  49.80
sdj     0.00   20274.00  0.00  201.00   0.00  163936.00   0.00  81968.00   815.60    11.90   54.74   2.83  56.90
sdk     0.00       0.00  0.00   39.00   0.00    2752.00   0.00   1376.00    70.56     1.19   26.26  14.00  54.60
sdj     0.00   20273.00  0.00  198.00   0.00  163936.00   0.00  81968.00   827.96    10.96   54.34   2.77  54.80
sdk     0.00       0.00  0.00   50.00   0.00    3072.00   0.00   1536.00    61.44     1.43   30.48  10.90  54.50
sdj     0.00   16217.00  0.00  200.00   0.00  131138.00   0.00  65569.00   655.69    10.22   56.56   2.71  54.10
sdk     0.00       0.00  0.00   50.00   0.00    2982.00   0.00   1491.00    59.64     1.37   28.78  10.92  54.60
By creating thousands of files or modifying the same file thousands of times a minute, we can see the added load on the journal device sdk as well as the filesystem device sdj. By understanding the end goal, we can make better decisions about how to size and lay out a filesystem.
In addition, notice the block size of the I/O requests submitted to the filesystem and how the filesystem responded. On the earlier sequential read, requests were limited to 128KB. On a sequential write, however, XFS passes along the block size requested by the command issuing the SCSI write, so the average request size works out to 512k, as shown in the previous iostat illustration. Will the same results hold on a completely different filesystem?
Let's repeat our test on ext3, also known as ext2 with journaling, to depict I/O latency and blocking factors.
atlorca2:/ext3.test # dd if=/ext3.test/usr.tar of=/dev/null bs=512k

atlorca2:/ # iostat -x 1 100 | grep sdj
Device: rrqm/s wrqm/s  r/s     w/s   rsec/s     wsec/s  rkB/s      wkB/s  avgrq-sz avgqu-sz await svctm %util
sdj     60.00   0.00  898.00   0.00 229856.00    0.00  114928.00    0.00   255.96    0.98   1.10  1.09  98.20
sdj     58.00   0.00  939.00   0.00 240360.00    0.00  120180.00    0.00   255.97    0.98   1.04  1.04  97.50
sdj     62.00   1.00  918.00   3.00 234984.00   32.00  117492.00   16.00   255.17    0.97   1.06  1.05  97.00
sdj     62.00   0.00  913.00   0.00 233704.00    0.00  116852.00    0.00   255.97    0.98   1.07  1.07  97.80
sdj     58.00   0.00  948.00   0.00 242664.00    0.00  121332.00    0.00   255.97    0.96   1.01  1.01  96.20
sdj     62.00   0.00  933.00   0.00 238824.00    0.00  119412.00    0.00   255.97    0.97   1.04  1.04  97.30
The sequential read test holds the same blocking factor as previously seen on XFS, with a little savings on overhead with respect to average wait time and service time. To continue our example, let's proceed with the file create script discussed previously.
#!/bin/sh
count=1
total=0
while [ $total -ne $* ]
do
    total=`expr $total + $count`
    touch $total
    dd if=/ext3.test/zero.out of=/ext3.test/$total bs=512k > /dev/null 2>&1  # Using dd to control the block size.
done
atlorca2:/ext3.test # ./count_greg.sh 10000

atlorca2:/ # iostat -x 1 100 | grep sdj
Device: rrqm/s   wrqm/s  r/s    w/s    rsec/s    wsec/s  rkB/s     wkB/s   avgrq-sz avgqu-sz  await  svctm  %util
sdj     0.00   51687.00  0.00  268.00   0.00  416864.00   0.00  208432.00  1555.46   128.62  346.74   3.45  92.40
sdj     0.00   27164.00  1.00  264.00   8.00  219040.00   4.00  109520.00   826.60   139.42  521.48   3.77 100.00
sdj     0.00     656.00  1.00  113.00   8.00    5312.00   4.00    2656.00    46.67    21.95  516.42   3.85  43.90
sdj     0.00       0.00  1.00    0.00   8.00       0.00   4.00       0.00     8.00     0.00    1.00   1.00   0.10
sdj     0.00       0.00  0.00    0.00   0.00       0.00   0.00       0.00     0.00     0.00    0.00   0.00   0.00
sdj     0.00   52070.00  1.00  268.00   8.00  419936.00   4.00  209968.00  1561.13   128.44  346.09   3.43  92.30
sdj     0.00   27800.00  0.00  271.00   0.00  224160.00   0.00  112080.00   827.16   139.27  513.93   3.69 100.00
sdj     0.00     662.00  2.00  112.00  16.00    5368.00   8.00    2684.00    47.23    20.45  489.31   3.91  44.60
sdj     0.00       0.00  0.00    0.00   0.00       0.00   0.00       0.00     0.00     0.00    0.00   0.00   0.00
sdj     0.00       0.00  1.00    0.00   8.00       0.00   4.00       0.00     8.00     0.00    0.00   0.00   0.00
sdj     0.00   51176.00  0.00  273.00   0.00  412792.00   0.00  206396.00  1512.06   128.12  336.55   3.37  92.00
sdj     0.00   27927.00  0.00  274.00   0.00  225184.00   0.00  112592.00   821.84   138.00  510.66   3.65 100.00
sdj     0.00     658.00  0.00  105.00   0.00    5328.00   0.00    2664.00    50.74    17.44  492.92   3.57  37.50
sdj     0.00       0.00  1.00  128.00   8.00    1024.00   4.00     512.00     8.00     2.88   22.32   0.35   4.50
Notice how the filesystem buffers the outbound I/O, submitting it to the SCSI layer in a burst pattern. Nothing is wrong with this I/O pattern, but understand that the larger the burst, the larger the strain on the storage array. For example, Exchange servers save up and de-stage bursts of I/O operations, which can flood an array's write-pending cache, so you should monitor for excessive write bursts. Note that the write block size still averages 512k, similar to the XFS writes with large block requests.
Utilizing Key Benefits of a Filesystem
As we've seen, I/O block sizes, stripe sizes, and filesystem layouts each have benefits that aid I/O performance. In addition, most filesystems try to anticipate the next request so that data is already in memory by the time it is asked for; this is accomplished by read-ahead algorithms. Ext2, Ext3, JFS, and XFS all perform read-ahead, as the following excerpt from the XFS mount code (/usr/src/linux/fs/xfs/xfs_mount.c) shows:
/*
 * Set the number of readahead buffers to use based on
 * physical memory size.
 */
if (xfs_physmem <= 4096)                /* <= 16MB */
        mp->m_nreadaheads = XFS_RW_NREADAHEAD_16MB;
else if (xfs_physmem <= 8192)           /* <= 32MB */
        mp->m_nreadaheads = XFS_RW_NREADAHEAD_32MB;
else
        mp->m_nreadaheads = XFS_RW_NREADAHEAD_K32;

if (sbp->sb_blocklog > readio_log) {
        mp->m_readio_log = sbp->sb_blocklog;
} else {
        mp->m_readio_log = readio_log;
}
mp->m_readio_blocks = 1 << (mp->m_readio_log - sbp->sb_blocklog);

if (sbp->sb_blocklog > writeio_log) {
        mp->m_writeio_log = sbp->sb_blocklog;
} else {
        mp->m_writeio_log = writeio_log;
}
mp->m_writeio_blocks = 1 << (mp->m_writeio_log - sbp->sb_blocklog);
Although read-ahead is a powerful feature, it deserves some care. It is not that read-ahead itself is a drawback; a true increase in read performance can be seen on any filesystem that uses it. But filesystem block size matters: if the filesystem block size is 8k and an application is reading sequentially in 1k blocks (an index scan, for example), read-ahead kicks in and pulls an extra, predefined number of blocks, each equal to the filesystem block size. The concern, in short, is to avoid reading in more data than is needed.
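Read-ahead at the block-device layer can be inspected and adjusted with blockdev; the following is a minimal sketch against the disk used in this chapter (the values are in 512-byte sectors, and the choice of 512 sectors is only an example):

atlorca2:~ # blockdev --getra /dev/sdj        # show the current read-ahead setting, in 512-byte sectors
atlorca2:~ # blockdev --setra 512 /dev/sdj    # set read-ahead to 512 sectors (256KB)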
Another performance boost can be found in the buffer cache. Filesystems such as XFS, Reiser, Ext2, and Ext3 use the buffer cache, which leaves less memory for an application to process its own data and can force more physical I/O (PIO). Later in this chapter we discuss the differences among raw, Oracle Cluster File System (OCFS), and XFS in an example involving buffer cache and read-ahead. Before we jump too far ahead, though, we need to cover one last topic with respect to disk performance.
Linux and Windows Performance and Tuning Sector Alignments
We have covered some in-depth I/O troubleshooting tactics for character and block devices. Now we need to address the rumor mill about disk geometry alignment. Geometry alignment, also known as sector alignment, is the new craze in Windows performance tweaking. A cylinder is a narrow band, like a ring, at a given radius across the platters; it is made up of tracks (one ring per platter surface), and each track is divided into wedge-shaped sectors, all described in great detail in Chapter 6, "Disk Partitions and File Systems." To discuss the performance concerns with sector alignment, we first depict sector locations (see Figure 4-1).
Figure 4-1 Cylinders, tracks, and sectors
Sector alignment provides little to no performance boost in Linux. To date, no issues exist with regard to how partitions and filesystems interact with sector alignment on a given platter, regardless of whether the platter is logical or physical within Linux. However, for those who are interested, a performance boost has been documented with respect to DOS 6.x, Windows 2000, and later. See http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/Default.asp?url=/resources/documentation/Windows/2000/server/reskit/en-us/prork/pree_exa_oori.asp and http://www.microsoft.com/resources/documentation/windows/2000/professional/reskit/en-us/part6/proch30.mspx for more information.
Performance Tuning and Benchmarking Using bonnie++
Now that we have covered some basic guidelines about I/O performance metrics, we need to revisit our primary goal. As already mentioned, that goal is to deliver methods for finding performance problems. In all circumstances, a good performance snapshot should be taken at every data center before and after changes such as firmware updates, fabric changes, host changes, and so on.
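A snapshot need not be elaborate. The following is a minimal sketch of one approach (the directory, sample interval, and duration are assumptions to adapt to your environment):

#!/bin/sh
# Hypothetical baseline capture: save sar and iostat samples to a dated
# directory so results before and after a change can be compared.
DIR=/var/tmp/perf-baseline/`date +%Y%m%d-%H%M`
mkdir -p $DIR
iostat -x 5 60 > $DIR/iostat.out &               # 5-second samples for 5 minutes
sar -A -o $DIR/sar.data 5 60 > /dev/null 2>&1 &  # binary sar data for later replay (sar -f or isag)
wait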
The following is generalized performance data from a single LUN (RAID 5, 7d+1p), provided to demonstrate the performance benchmark tool bonnie++. The test does not depict the limit of the array used; it simply allows a brief comparison of single-LUN performance characteristics across filesystem configurations. The bonnie++ benchmarks reflect results from the equipment used throughout this chapter with the XFS and Ext3 filesystems.
atlorca2:/ext3.test # bonnie++ -u root:root -d /ext3.test/bonnie.scratch/ -s 8064m -n 16:262144:8:128
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
atlorca2      8064M 15029  99 144685  44 52197   8 14819  99 124046   7 893.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:262144:8/128       721  27 14745  99  4572  37   765  28  3885  28  6288  58
Testing XFS with the journal on the same device (an internal log) yields the following:
atlorca2:/ # mkfs.xfs -f -l size=10000b /dev/sdj1
meta-data=/dev/sdj1              isize=256    agcount=16, agsize=3817947 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=61087152, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=10000, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
atlorca2:/ # mount -t xfs -o logbufs=8,logbsize=32768 /dev/sdj1 /xfs.test
atlorca2:/ # mkdir /xfs.test/bonnie.scratch/
atlorca2:/xfs.test # bonnie++ -u root:root -d /xfs.test/bonnie.scratch/ -s 8064m -n 16:262144:8:128
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
atlorca2      8064M 15474  99 161153  21 56513   8 14836  99 125513   9 938.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:262144:8/128      1151  24 12654 100  9705  89  1093  22 12327  99  6018  71
Testing XFS with the journal on a separate (remote) device provides these results:
atlorca2:~ # mount -t xfs -o logbufs=8,logbsize=32768,logdev=/dev/sdk1 /dev/sdj1 /xfs.test
atlorca2:/xfs.test # mkdir bonnie.scratch/
atlorca2:/xfs.test # bonnie++ -u root:root -d /xfs.test/bonnie.scratch/ -s 8064m -n 16:262144:8:128
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
atlorca2      8064M 15385  99 146197  20 58263   8 14833  99 126001   9 924.6   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:262144:8/128      1175  24 12785 100 10236  95  1097  22 12280  99  6060  72
Just by changing the filesystem and journal log location, we pick up some nice performance on sequential I/O access with respect to XFS. The point of the previous demonstration is to identify factors other than hardware that increase performance. As we have seen, simply changing the filesystem layout or type can increase performance greatly. One other performance tool we enjoy using for SCSI measurements is IOzone, found at www.iozone.org.
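As a quick illustration, an IOzone run roughly comparable to the bonnie++ tests above might look like the following sketch (the flags shown are only a subset; consult the IOzone documentation for the full set):

atlorca2:/xfs.test # iozone -s 8g -r 512k -i 0 -i 1 -f /xfs.test/iozone.tmp
# -s 8g       file size to test
# -r 512k     record (block) size
# -i 0 -i 1   run the write/rewrite and read/reread tests
# -f          scratch file to use on the filesystem under test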
Assessing Application CPU Utilization Issues
As with any performance problem, usually more than one factor exists. Our troubleshooting performance journey continues with coverage of application CPU usage and how to monitor it. In this section, CPU usage and application-specific topics are covered, focusing on process threads.
Determining What Processes Are Causing High CPU Utilization
To begin, we want to demonstrate how a few lines of code can load a CPU to a 100% busy state. The following C code, built on SLES 9 and using a long integer counter, illustrates a simple count program that stresses the CPU in user space. It is important to understand that this application does not run in system space, also called kernel mode, because it performs none of the I/O discussed earlier in this chapter.
Example 1 goes like this:
#include <stdio.h>
#include <sched.h>
#include <pthread.h>   /* POSIX threads */
#include <stdlib.h>
#include <unistd.h>    /* getpid(), getppid() */

#define num_threads 1

void *print_func(void *);

int main ()
{
    int x;
    pthread_t tid[num_threads];

    printf("main() process has PID= %d PPID= %d\n", getpid(), getppid());

    /* Now to create pthreads (note: "<=" creates num_threads + 1 threads) */
    for (x=0; x <= num_threads; x++)
        pthread_create(tid + x, NULL, print_func, NULL);

    /* wait for termination of threads before main continues */
    for (x=0; x < num_threads; x++) {
        pthread_join(tid[x], NULL);
        printf("Main() PID %d joined with thread %d\n", getpid(), tid[x]);
    }
}

void *print_func (void *arg)
{
    long int y;

    printf("PID %d PPID = %d TID = %d\n", getpid(), getppid(), pthread_self());

    /* creating a very large loop for the CPU to chew on :) */
    /* Note, the following line may be changed to: for (y=1;y>0;y++) */
    for (y=1; y < 10000000000000; y++)
        printf("%ld\n", y);
    return 0;
}
Note that instead of just counting to a large integer, you may want to create an infinite loop. The C code in Example 2 (also shown in Chapter 8, "Linux Processes: Structures, Hangs, and Core Dumps") generates an infinite loop with a signal handler used to kill the threads.
#include <stdio.h>
#include <pthread.h>   /* POSIX threads */
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/unistd.h>
#include <errno.h>

#define num_threads 8

void *print_func(void *);
void stop_thread(int sig);

_syscall0(pid_t,gettid)

int main ()
{
    int x;
    pid_t tid;
    pthread_t threadid[num_threads];

    (void) signal(SIGALRM, stop_thread);   /* signal handler */

    printf("Main process has PID= %d PPID= %d and TID= %d\n",
           getpid(), getppid(), gettid());

    /* Now to create pthreads */
    for (x=1; x <= num_threads; ++x)
        pthread_create(&threadid[x], NULL, print_func, NULL);

    sleep(60);   /* Let the threads warm the cpus up!!! :) */

    for (x=1; x < num_threads; ++x)
        pthread_kill(threadid[x], SIGALRM);

    /* wait for termination of threads before main continues */
    for (x=1; x < num_threads; ++x) {
        printf("%d\n", x);
        pthread_join(threadid[x], NULL);
        printf("Main() PID %d joined with thread %d\n", getpid(), threadid[x]);
    }
}

void *print_func (void *arg)
{
    printf("PID %d PPID = %d Thread value of pthread_self = %d and TID= %d\n",
           getpid(), getppid(), pthread_self(), gettid());
    while (1);   /* nothing but spinning */
}

void stop_thread(int sig)
{
    pthread_exit(NULL);
}
To continue with Example 1, compile the code and run the application as follows:
atlorca2:/home/greg # cc -o CPU_load_count CPU_Load_count.c -lpthread
atlorca2:/home/greg # ./CPU_load_count | grep -i pid
main() process has PID= 5970 PPID= 3791
PID 5970 PPID = 3791 TID = 36354240
PID 5970 PPID = 3791 TID = 69908672
When running top, shown next, we find that our CPUs have a load, yet the PID reports zero percent usage for the CPU_load_count program. So where is the load coming from? To answer this question, we must look at the threads spawned by the parent process.
atlorca2:~ # top
top - 20:27:15 up 36 min,  3 users,  load average: 1.24, 1.22, 1.14
Tasks:  79 total,   1 running,  78 sleeping,   0 stopped,   0 zombie
Cpu0 :  0.1% us,  0.6% sy,  0.0% ni, 99.1% id,  0.2% wa,  0.0% hi,  0.0% si
Cpu1 :  1.7% us,  1.6% sy,  0.0% ni, 96.7% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu2 : 44.1% us, 19.0% sy,  0.0% ni, 36.9% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3 : 43.3% us, 21.0% sy,  0.0% ni, 35.7% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:   4142992k total,   792624k used,  3350368k free,   176352k buffers
Swap:  1049568k total,        0k used,  1049568k free,   308864k cached

  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  PPID RUSER UID WCHAN     COMMAND
 5974 root  16  0  3632 2112 3184 R  0.7  0.1 0:00.13  3874 root    0 -         top
 5971 root  15  0  3168 1648 2848 S  3.3  0.0 0:02.16  3791 root    0 pipe_wait grep
 5970 root  17  0 68448 1168 2560 S  0.0  0.0 0:00.00  3791 root    0 schedule_ CPU_load_count
 5966 root  16  0  7376 4624 6128 S  0.0  0.1 0:00.01  5601 root    0 schedule_ vi
 5601 root  15  0  5744 3920 4912 S  0.0  0.1 0:00.11  5598 root    0 wait4     bash
 5598 root  16  0 15264 6288  13m S  0.0  0.2 0:00.04  3746 root    0 schedule_ sshd
You can see top shows that the CPU_Load_count program is running, yet it reflects zero load on the CPU. This is because top is not thread-aware in SLES 9 by default.
A simple way to determine a thread's impact on a CPU is by using ps with certain flags, as demonstrated in the following. This is because in Linux, threads are separate processes (tasks).
atlorca2:/proc # ps -elfm >/tmp/ps.elfm.out
vi the file, find the PID, and focus on the threads in running (R) state.
F S UID   PID  PPID  C PRI NI ADDR   SZ WCHAN  STIME TTY      TIME     CMD
4 - root 5970  3791  0   -  -    - 4276 -      20:26 pts/0    00:00:00 ./CPU_load_count
4 S root    -     -  0  77  0    -    - schedu 20:26 -        00:00:00 -
1 R root    -     - 64  77  0    -    - schedu 20:26 -        00:02:41 -
1 S root    -     - 64  79  0    -    - schedu 20:26 -        00:02:40 -
Though top provides a great cursory view of your system, other performance tools are sometimes required to find the smoking gun, such as the ps command in the preceding example. Other times, performance trends are required to isolate a problem area, and products such as HP Insight Manager with performance plug-ins may be more suited.
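Depending on the procps version installed, thread-aware views are also available directly; the following are hedged examples (option support varies by release):

atlorca2:~ # ps -eLo pid,lwp,psr,pcpu,comm | grep CPU_load_count   # -L lists each thread (LWP) with its own CPU usage
atlorca2:~ # top -H                                                # where supported, -H (or pressing H) displays individual threads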
Using Oracle statspack
In the following example, we use Oracle's statistics package, called statspack, to focus on an Oracle performance concern.
DB Name         DB Id    Instance     Inst Num Release   Cluster Host
------------ ----------- ------------ -------- --------- ------- ----------
DB_NAME       1797322438 DB_NAME             3 9.2.0.4.0 YES     Server_name

              Snap Id     Snap Time      Sessions Curs/Sess Comment
              ------- ------------------ -------- --------- -------------------
Begin Snap:     23942 Date      11:39:12      144       3.0
  End Snap:     23958 Date      11:49:17      148       3.1
   Elapsed:              10.08 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:     6,144M   Std Block Size:      8K
           Shared Pool Size:     1,024M       Log Buffer: 10,240K

Load Profile
~~~~~~~~~~~~                            Per Second    Per Transaction
                                   ---------------    ---------------
                  Redo size:          6,535,063.00           6,362.58
              Logical reads:             30,501.36              29.70   <- Cache Reads (LIO)
              Block changes:             15,479.36              15.07
             Physical reads:              2,878.69               2.80   <- Disk reads (PIO)
            Physical writes:              3,674.53               3.58   <- Disk writes (PIO)
                 User calls:              5,760.67               5.61
                     Parses:                  1.51               0.00
                Hard parses:                  0.01               0.00
                      Sorts:                492.41               0.48
                     Logons:                  0.09               0.00
                   Executes:              2,680.27               2.61
               Transactions:              1,027.11

  % Blocks changed per Read:   50.75    Recursive Call %:   15.27
 Rollback per transaction %:    0.01       Rows per Sort:    2.29

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             Buffer Nowait %:   99.98       Redo NoWait %:  100.00
                Buffer Hit %:   90.57    In-memory Sort %:  100.00
               Library Hit %:  100.00        Soft Parse %:   99.45
          Execute to Parse %:   99.94         Latch Hit %:   98.37
Parse CPU to Parse Elapsed %:   96.00     % Non-Parse CPU:   99.99

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   28.16   28.17
    % SQL with executions>1:   82.91   81.88
  % Memory for SQL w/exec>1:   90.94   90.49

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                              % Total
Event                                         Waits   Time (s) Ela Time
---------------------------------------- ------------ -------- --------
db file sequential read                      1,735,573   17,690    51.88
log file sync                                  664,956    8,315    24.38
CPU time                                                  3,172      9.32
global cache open x                          1,556,450    1,136     3.36
log file sequential read                         3,652      811     2.35

-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second

                                                                  Avg
                                                    Total Wait   wait   Waits
Event                            Waits   Timeouts     Time (s)   (ms)    /txn
----------------------- ------------- ---------- ------------ ------ -------
db file sequential read     1,738,577          0       17,690     10     2.8
This statspack output is truncated to conserve space. Some of the key points of interest are highlighted in bold: Logical reads (LIO), Physical reads/writes (PIO), latches, buffered I/O, and db file sequential read. Understand that many hundreds of parameters exist, and Oracle has published many performance documents and offers many classes that we recommend reading and taking. However, this chapter's main goal is performance troubleshooting from a wide view; in other words, it is a quick reference guide to performance troubleshooting. The bold areas in the previous listing are some critical places to focus, especially LIO and PIO.
To elaborate on those elements, we begin with LIO and PIO. An LIO is an access to a buffer already residing in the database buffer cache (memory), and a PIO is an I/O request from Oracle to the system to fetch a data block from the spindle. In short, LIO is faster than PIO, but PIO is not always the bottleneck. A critical piece of the puzzle with regard to performance problems on any database connected to any storage device is that some percentage of PIOs (from Oracle's viewpoint) are actually reads from the host cache, the storage cache, or both. Thus a PIO from the database's perspective may in fact still be satisfied from cache, so a high number of PIOs is not always a bad omen. In the previous example, the PIO reads were around 1.7 million with an average latency of 10ms, which, by the way, is not bad. However, although the I/O is fine, a memory control performance problem may still linger in the background.
Thus, while the I/O may in fact be fine, memory lock control must also be addressed for smooth, fast database operation. Note that in this example the Latch Hit percentage is around 98%, which raises a red flag. A latch is basically a protection agent controlling access to shared memory within the system global area (SGA). In short, the goal is to keep memory available so that the latch-free count remains high, keeping the Latch Hit percentage around 99.5% or better. Latch information can be viewed by reviewing both the willing-to-wait and not-willing-to-wait latches found in the immediate_gets and immediate_misses columns of V$LATCH, or by looking at Oracle's statspack. In addition to waiting on free memory segments with regard to latches, we need to touch on buffer waits.
When a database starts to show an increase in buffer waits, focus on two main issues. The first is that memory is running low, which aggravates the second: physical I/O reads and writes. A buffer wait is logged when the database must flush a write to the spindle to free up cache for data processing. The quick solution is to run raw (or another filesystem that bypasses the host buffer cache, such as OCFS) so that memory is used for data processing only. The huge drawback to a non-buffer-cache filesystem, however, is the loss of the read-ahead performance discussed earlier in this chapter.
Now that we have covered some I/O basics, both physical and logical, and memory control with respect to latches, we present an example of an application failing to initialize due to a lack of shared memory. We demonstrate the shortage without going into detail about System V message queues, semaphores, or shared memory. As with all applications, more memory equals faster performance, and without enough memory, failure is imminent.
Troubleshooting "No Space Left on Device" Errors When Allocating Shared Memory
Our example shows a common error 28 ("No space left on device") on a 64-bit kernel, where a 64-bit application fails to initialize because a shared memory allocation cannot be satisfied. The focus is on interprocess communication (IPC) resources, reported by ipcs, which matter for any large application that spawns multiple threads or processes. Using our 64-bit machine, we bring a 64-bit Oracle 10g instance online, which fails with the following error:
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device
System parameters are as follows:
# ipcs

------ Shared Memory Segments --------
key        shmid      owner   perms   bytes       nattch  status
0x00000000 1245184    gdm     600     393216      2       dest
0x852af124 127926273  oracle  640     8558477312  15

------ Semaphore Arrays --------
key        semid      owner   perms   nsems
0x3fbfeb1c 1933312    oracle  640     154

==== kernel parameters ======
# sysctl -a
kernel.sem = 250 32000 100 128
kernel.msgmnb = 16384
kernel.msgmni = 16
kernel.msgmax = 8192
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 34359738368

==== process ulimits (bash shell) ====
$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 4
max memory size       (kbytes, -m) unlimited
open files                    (-n) 65536
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 10240
cpu time             (seconds, -t) unlimited
max user processes            (-u) 16384
virtual memory        (kbytes, -v) unlimited
This failure is a result of the kernel being unable to fulfill the shared memory request. The "not enough space" condition is raised in /usr/src/linux/ipc/shm.c, which reads:
if (shm_tot + numpages >= shm_ctlall)
        return -ENOSPC;
The program we tried to start required more shared memory than kernel.shmall allowed, which in turn caused the Oracle application to fail on initialization. The solution is to increase the shared memory limit through that kernel parameter; in this example, we simply raise it to shmall=8388608.
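A minimal sketch of the change follows (the value comes from the example above; persisting it in /etc/sysctl.conf is optional but typical):

# Raise the system-wide shared memory limit (in pages) for the running kernel.
sysctl -w kernel.shmall=8388608

# Persist the setting across reboots.
echo "kernel.shmall = 8388608" >> /etc/sysctl.conf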
Additional Performance Tools
As we conclude this chapter, we cover some less common tools that can be used to monitor performance characteristics and, in most cases, build charts. isag, RRDtool, Ganglia (which uses RRDtool to monitor grid computing and clustering), and Nagios are all great performance tools. Other monitoring tools exist, but for the most part they are the everyday commands such as sar, iostat, top, and netstat. Due to space limitations, we cover only isag in this chapter; the other tools are easy to find and configure if you want to pursue them. isag, found at http://www.volny.cz/linux_monitor/isag/index.html, provides a nice graphical front end to sar. After the sysstat tools have been loaded, isag should be included, as depicted here:
atlorca2:/tmp # rpm -qf /usr/bin/isag
sysstat-5.0.1-35.1
isag
Most engineers who work with sar, iostat, and other performance tools will enjoy using isag, the GUI front end to sar. To give a quick demonstration of how the tool works, we must enable sar to collect some data. To achieve a real-world demonstration, we repeat our previous bonnie++ test while running sar -A to collect as much detail as possible and display it through isag.
To demonstrate, we mount an XFS filesystem with a local journal and use an NFS mount point to an HP-UX server, generating both disk and network load through bonnie++ while monitoring through sar and isag.
atlorca2:/var/log/sa # mount -t xfs -o logbufs=8,logbsize=32768 /dev/sdj1 /xfs.test
atlorca2:/var/log/sa # df
Filesystem                            1K-blocks      Used Available Use% Mounted on
hpuxos.atl.hp.com:/scratch/customers  284470272  50955984 218940848  19% /scratch/customers
/dev/sdj1                             244308608      4608 244304000   1% /xfs.test
atlorca2:/var/log/sa # sar -A -o 1 100    # This builds a file in /var/log/sa that isag will use.
atlorca2:/var/log/sa # isag
The resulting screenshots provide information on CPU utilization (see Figure 4-2) and swap utilization (see Figure 4-3).
Remember, if you are swapping, you need more memory.
Figure 4-2 CPU utilization
Figure 4-3 Swap utilization