- Overview
- Cycle-based, Round-robin
- Deadline-driven, Random
- Optimization Techniques
- Online Reconfiguration
- Supporting Multi-zone Disks
3.4 Optimization Techniques
3.4.1 Request Migration
By migrating one or more requests from a group with zero idle slots to a group with many idle slots, the system can minimize the latency incurred by a future request. For example, in Figure 3.8, if the system migrates a request for X from G4 to G2, then a request for Z is guaranteed to incur a maximum latency of one time period. Migrating a request from one group to another increases the memory requirement of a display because the retrieval of data runs ahead of its display. Migrating a request from G4 to G2 increases the memory requirement of this display by three buffers. This is because when a request migrates from G4 to G2 (see Figure 3.8), G4 reads X0 and sends it to the display. During the same time period, G3 reads X1 into a buffer (say, B0) and G2 reads X2 into a buffer (B1). During the next time period, G2 reads X3 into a buffer (B2) while X1 is displayed from memory buffer B0. (G2 reads X3 because the groups move one cluster to the right at the end of each time period to read the next block of each active display occupying their servers.) During the next time period, G2 reads X4 into a memory buffer (B3) while X2 is displayed from memory buffer B1. This round-robin retrieval of data from clusters by G2 continues until all blocks of X have been retrieved and displayed.
Figure 3.8 Load balancing.
With this technique, if the distance from the original group to the destination group is B, then the system requires B + 1 buffers. However, because a request can migrate back to its original group once a request in the original group terminates and relinquishes its slot (i.e., a time slot becomes idle), the increase in the total memory requirement can be reduced to the point of becoming negligible.
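As a concrete check of the B + 1 rule, the following sketch counts the buffers a migrated display needs, assuming groups are numbered so that migrating from G4 to G2 moves a request two groups ahead (the function name `migration_buffers` is ours, not from the text):

```python
def migration_buffers(src_group, dst_group, num_groups):
    """Extra buffers needed when a request migrates from src_group to a
    destination group that reads ahead of it.  With migration distance B,
    B + 1 buffers hold the blocks retrieved ahead of their display."""
    distance = (src_group - dst_group) % num_groups   # B, how far ahead the destination reads
    return distance + 1
```

For the example in Figure 3.8, migrating from G4 to G2 in a six-group system gives a distance of 2 and hence 3 extra buffers, matching the text.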
3.4.2 Object Replication
Full Replication (FR)
To reduce the startup latency of the system, one may replicate objects. The simplest way is to replicate entire objects in the database so that all blocks have the same number of replicas. Let the original copy of an object X be its primary copy, XP. All other copies of X are termed its secondary copies. The system may construct r secondary copies for object X, each denoted as Xi where 1 ≤ i ≤ r. The number of instances of X is the number of copies of X, r + 1 (r secondary plus one primary). Assuming two instances of an object, by starting the assignment of X1 with a disk different from the one containing the first block of its primary copy, the maximum startup latency incurred by a display referencing X can be reduced by one half. This also reduces the expected startup latency. The assignment of the first block of each copy of X should be separated by a fixed number of disks in order to maximize the benefits of replication. Assuming that the primary copy of X is assigned starting with an arbitrary disk (say, disk di), the assignment of secondary copies of X is as follows: the first block of copy Xj is assigned starting with disk (i + j × ⌈d/(r+1)⌉) mod d. For example, if there are two secondary copies of object Y (Y1, Y2) and its primary copy is assigned starting with disk d0, then Y1 is assigned starting with disk d2 while Y2 is assigned starting with disk d4 when d = 6.
This also reduces the expected startup latency. The assignment of the first block of each copy of Oi should be separated by a fixed number of disks in order to maximize the benefits of replication. Let D(Oij) denote the disk that stores the first block of the jth replica of object Oi, with Oi0 denoting its primary copy. Then, assuming Ri instances of Oi, from Eq. 3.2, D(Oij) = (D(Oi0) + j × ⌈d/Ri⌉) mod d.
The location of an object Oi with Ri replicas can then be represented by the set Ti = {D(Oij) | 0 ≤ j < Ri}.
For example, in a six-disk system, if there are two secondary copies of object Oi (Oi1 and Oi2) and its primary copy, Oi0, is assigned starting with disk d0, then Oi1 is assigned starting with disk d2 while Oi2 is assigned starting with disk d4. Thus, Ti = {0, 2, 4}.
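The placement rule above can be sketched as follows; `replica_start_disks` is a hypothetical helper name, and the sketch assumes d is a multiple of the number of instances, as in the six-disk example:

```python
def replica_start_disks(d, num_instances, primary_disk=0):
    """Starting disk for each of the num_instances copies of an object
    (primary plus secondaries), separated by d // num_instances disks so
    the copies are spread evenly across the d disks."""
    separation = d // num_instances
    return [(primary_disk + j * separation) % d for j in range(num_instances)]
```

With d = 6 and three instances starting at disk d0, this yields the set {0, 2, 4} from the example.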
With two instances of an object, the expected startup latency for a request referencing this object can be computed as follows. To find an available server, the system simultaneously checks the two groups corresponding to the two different disks that contain the first blocks of these two instances. A failure happens only if both groups are full, reducing the number of failures for a request. The maximum number of failures before a success is reduced by half due to the simultaneous search of two groups in parallel. Therefore, the probability of i failures in a system with each object having two instances is identical to that of a system consisting of d/2 disks with 2N servers per disk. A request would experience a lower number of failures with more instances of objects. With j instances of an object in the system, the probability that a request referencing this object observes i failures is:
pj(i) = q^(ij) × (1 − q^j), where q denotes the probability that an individual group has no idle slot (group occupancies are assumed independent). Hence, the expected startup latency of requests that reference an object with j instances is ℓj = Σi (i + 0.5) × pj(i) × Tp, where Tp denotes the duration of a time period.
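A minimal numeric sketch of this analysis, under our simplifying assumptions that group occupancies are independent with a common full probability q, and that a request waits half a time period on average after its last failure (the text's exact model may differ):

```python
def p_failures(i, j, q):
    """Probability of exactly i failures before a success when j groups
    are probed each time period and each probed group is full with
    probability q (occupancies assumed independent)."""
    return (q ** j) ** i * (1 - q ** j)

def expected_latency(j, q, Tp=1.0, horizon=5000):
    """Expected startup latency in units of the time period Tp:
    i full periods of retries plus, on average, half a period of
    rotational wait once a slot is found."""
    return sum((i + 0.5) * Tp * p_failures(i, j, q) for i in range(horizon))
```

Even under heavy load (q = 0.9), doubling the number of instances from one to two cuts the expected latency sharply, illustrating why more instances reduce the observed number of failures.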
Selective Replication (SR)
FR greatly increases the storage requirement of an application. One important observation in real applications is that objects may have different access frequencies. For example, in a video-on-demand system, more than half of the active requests might reference only a handful of recently released movies [2]. The authors of [34] model the empirical distribution of video rental frequency using a Zipf distribution. By replicating frequently referenced objects more often than less popular ones, i.e., by selectively determining the number of replicas of an object based on its access frequency, we can significantly reduce the startup latency without a dramatic increase in the storage space requirement of an application.
The optimal number of secondary copies per object is based on its access frequency and the available storage capacity. The formal statement of the problem is as follows. Assuming n objects in the system, let S be the total amount of disk space for these objects and their replicas. Let Rj be the optimal number of instances for object j, Sj the size of object j, and Fj the access frequency (%) of object j. The problem is to determine Rj for each object j (1 ≤ j ≤ n) while satisfying Σ(j=1..n) Rj × Sj ≤ S.
There exist several algorithms to solve this problem [93]. A simple one, known as the Hamilton method, computes the number of instances per object j from its frequency by calculating a quota for the object (Fj × S); each object receives the integer part of its quota, and the leftover instances are distributed in order of the largest fractional remainders to compute Rj. However, this method suffers from two paradoxes, namely the Alabama and Population paradoxes: the Hamilton method may reduce the value of Rj when either S or Fj increases in value. The divisor methods provide a solution free of these paradoxes (see Figure 3.9); for further details and proofs, see [93]. Using a divisor method named Webster (d(Rj) = Rj + 0.5), we classify objects based on their number of instances, so that objects in a class have the same number of instances. The expected startup latency in this system with n objects is E[ℓ] = Σ(i=1..n) Fi × ℓ(Ri),
Figure 3.9 Divisor method to compute the number of replicas per object.
where ℓ(Ri) is the expected startup latency for an object having Ri instances.
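A minimal sketch of a Webster-style highest-averages allocation with divisor d(R) = R + 0.5, assuming equal-size objects and a fixed budget of total instances rather than bytes (a simplification of the storage constraint above; `webster_apportion` is our naming):

```python
import heapq

def webster_apportion(freqs, total_instances):
    """Allocate total_instances among objects by access frequency using
    the Webster divisor d(R) = R + 0.5: repeatedly award the next
    instance to the object with the highest priority F / (R + 0.5).
    Every object keeps at least its primary copy."""
    n = len(freqs)
    counts = [1] * n                                   # primary copies
    heap = [(-f / (1 + 0.5), i) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    for _ in range(total_instances - n):
        _, i = heapq.heappop(heap)                     # highest priority
        counts[i] += 1
        heapq.heappush(heap, (-freqs[i] / (counts[i] + 0.5), i))
    return counts
```

For example, three objects with frequencies 50%, 30%, and 20% and a budget of six instances receive 3, 2, and 1 instances, respectively. Because the divisor method re-ranks objects after every award, it avoids the Alabama and Population paradoxes of the Hamilton method.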
Partial Replication (PR)
FR and SR replicate all blocks of an object. Considering the size of large SM objects and the bounded amount of space available for replication, replicating only the first small portion of each object several times can greatly reduce the extra space requirement while providing a much shorter startup latency. For example, we can replicate only the first 10 blocks of an object X when the number of blocks of X is 100. The placement of blocks follows the same approach as the previous replication techniques. The assignment of requests is similar to that in FR and SR, but the system assigns a new request to the group currently accessing the disk where the first block of the primary copy of the requested object resides (call it GP) whenever possible. In other words, when both GP and the group accessing the first block of a partially replicated copy (GS) have at least one empty slot, a new request is assigned to GP. However, if GP is full and GS has empty slots, the request goes to GS. Only when both have no empty slot does the request experience a failure. This has the same impact on the startup latency as FR with two copies per object. The only difference is that a request assigned to GS must be relocated to GP before it reaches the last block of the partially replicated copy (the tenth block in the previous example). The newly relocated request then retrieves the primary copy until the end of its display. For example, in Figure 3.10, a request for X arrives and is assigned to GS because GP is full. For the next seven time periods, the request is serviced in GS until GP releases a slot when a display ends. Then, the request is relocated from GS to GP and is serviced until the end of its display. With PR, if we replicate the first 10% of an object 10 times, it requires only twice the original storage requirement (the size of the primary copy). As discussed in the previous section, this can greatly reduce the startup latency, as if the system had ten full copies of the object, which would require ten times the original storage.
However, there is a chance that a request assigned to the partially replicated copy cannot be relocated to the primary copy before the display reaches the end of the partially replicated copy. Then, hiccups happen.
Figure 3.10 Partial replication technique.
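The PR admission preference can be summarized by this tiny decision helper (`admit_request` is our name; the text gives no code):

```python
def admit_request(primary_has_slot, secondary_has_slot):
    """PR admission rule: prefer the group serving the primary copy, GP,
    so no later relocation is needed; fall back to the group serving the
    partial replica, GS; otherwise the request fails this time period."""
    if primary_has_slot:
        return "GP"
    if secondary_has_slot:
        return "GS"
    return "failure"
```

A request admitted to GS must later be relocated to GP before the partial copy runs out, which is where the hiccup risk discussed next arises.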
If a disk drive can support N simultaneous displays and s is the average display time (service time) of objects, the service rate of a disk drive (a group) is N/s. Hence, ideally, if we replicate the first s/N portion of an object, no hiccup happens. However, due to statistical variation in the times at which requests arrive at and leave the system, there exists a probability that a request experiences a hiccup with this technique. To reduce this probability, we should replicate a larger portion than s/N, resulting in a higher space requirement.
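Plugging in the numbers used later in the Comparison (N = 16 displays per disk, s = 3600 seconds), the ideal replicated prefix under this rule works out to s/N = 225 seconds, i.e., a fraction 1/N = 6.25% of the object; the experiments replicate 20% precisely to leave margin for this statistical variation. A sketch of the arithmetic (`ideal_replicated_prefix` is our helper name):

```python
def ideal_replicated_prefix(N, s):
    """Ideal PR prefix: s / N seconds of playback, i.e., a fraction
    1 / N of an object with display time s, matching the service rate
    N / s of a group."""
    return s / N, 1.0 / N
```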
Request migration with temporary buffering can eliminate this hiccup problem. In Figure 3.11, assume that a request in G4 is accessing a partially replicated copy of X while G1 is accessing the primary copy of X. A hiccup happens if G1 has no idle slot by the time G4 reaches the last block of the secondary copy. Request migration utilizes buffers to avoid this potential hiccup. For example, while G4 accesses the last block of the secondary copy, X9, G0 reads the block X10 from disk drive d0 and stores it in a temporary buffer during the same time period. From the next time period on, G0 retrieves the remaining blocks sequentially until group G1 releases a time slot. Then the scheduler migrates the request to group G1 and the temporary buffers are freed. Hence, hiccups do not happen even though G1 is full, as long as there exists at least one available slot among all groups. Note that there is a tradeoff between the amount of required buffers and the number of hiccups: extra buffers increase the system cost per display.
Figure 3.11 Request migration in the partial replication technique.
Partial Selective Replication (PSR)
This is a hybrid of the PR and SR approaches discussed in the previous subsections. By taking advantage of both the skewed access frequencies of an application and the reduced storage requirement of PR, this approach determines the number of partially replicated secondary copies of an object based on its access frequency. Modifying the divisor method for partial replication is straightforward.
Comparison
We compare the four replication techniques using simulation studies. Assuming two different access frequency models, 1) uniform and 2) skewed (Zipf), the average startup latency of each technique is quantified as a function of the available extra disk storage space. In these experiments, we assumed that the entire database was disk resident. A server was configured with twelve Seagate Cheetah ST39103LW/LC disks; each disk has 9 GBytes of space and a data transfer rate of 80 Mb/s. We assumed that each video clip was encoded using MPEG-2 with a display rate of 4 Mb/s. The database consisted of 30 one-hour video clips occupying a total of 54 GBytes of disk space. A video clip consisted of 3600 blocks with a time period of one second. For FR and SR, all 3600 blocks were replicated, while the first 720 blocks (20%) were replicated for PR and PSR.
We assumed a uniform distribution of access frequency across the 30 video clips, and also analyzed a skewed (Zipf) distribution of access as a simplified model of real access frequencies (Figure 3.12). The bandwidth of each disk can support sixteen simultaneous displays (N = 16). Hence, the maximum throughput of this configuration (12 disks) is 192 simultaneous displays. We assumed that request arrivals followed a Poisson process with an arrival rate of λ = 0.05 req/sec, for a system utilization of approximately 95%. Upon the arrival of a request, if the scheduler fails to find an idle slot in the system, the request is rejected.
Figure 3.12 Two access frequency distributions.
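As a sanity check of the simulation parameters, the offered load implied by λ = 0.05 req/sec, one-hour clips, and a capacity of 192 simultaneous displays is λ × s / (d × N) = 0.05 × 3600 / 192 ≈ 0.94, consistent with the roughly 95% utilization targeted above (`offered_load` is our helper name):

```python
def offered_load(arrival_rate, display_time, num_disks, displays_per_disk):
    """Offered load rho = lambda * s / (d * N): arrival rate times mean
    service time, divided by the number of display slots in the system."""
    return arrival_rate * display_time / (num_disks * displays_per_disk)
```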
Figure 3.13 shows the average startup latency quantified from our simulations using both the uniform and Zipf access distributions. The x axis represents the storage space of the system as a multiple of the database size; for example, 1.1 on the x axis means extra space equal to 10% of the database size. The y axis shows the resulting average startup latency. Note that x = 1 corresponds to the average startup latency without any replication. Due to statistical variance in request arrivals, the system could reach its maximum capacity of 192 simultaneous displays, in which case newly arriving requests were rejected during such peak times. The rejection rate was 2.5% on average across all experiments.
Figure 3.13 Average startup latency (ρ = 0.95).
The average startup latency decreases as the extra disk space for replication increases. PR and PSR provide the shortest startup latency with both the uniform and Zipf distributions. In the best case, when x = 2, reductions of 79% and 77% in the average startup latency were observed with the uniform and Zipf distributions, respectively. FR and PR step downward because they cannot create additional replicas until the increase in extra space is large enough to replicate all objects in the database, whereas SR and PSR decrease continuously as the extra space grows.
One observation is that the average startup latency can be significantly reduced even with a small amount of extra space. In Figure 3.13.b, with only a 20% increase in the storage requirement, SR, PR, and PSR provide reductions of 45%, 59%, and 56%, respectively, compared to the latency without replication (2.39 seconds). This implies that the system does not require a huge amount of extra space to achieve a startup latency small enough to meet the latency criteria of many applications. While PR and PSR provide the shortest startup latency, they require more memory because of the increased number of buffers for request migration. Thus, for a cost-effective solution, SR performs well without any increase in system cost, especially for applications with highly skewed access frequencies.