Monday, October 6, 2014

Techniques to optimize disk block accesses

Various techniques to optimize disk-block accesses / What are the techniques used for optimizing disk block accesses? / Overview of optimization of disk-block access



A block is a contiguous sequence of sectors in a single track of one platter of a hard disk. Data stored on the hard disk are stored in disk blocks; depending on its size, a piece of data may occupy one or several blocks. Reading or writing data by specifying an exact physical address (platter, track, sector) is cumbersome, so operating systems perform reads and writes in units of disk blocks. Because blocks have a fixed size for a given operating system, transferring whole blocks to and from the disk is simpler than addressing exact locations.
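To make the block abstraction concrete, here is a minimal sketch (not part of the original discussion) that maps a byte offset to a block number and an offset within that block. The 4 KiB block size is only an assumed example value.

    BLOCK_SIZE = 4096  # assumed fixed block size in bytes (e.g. 4 KiB)

    def block_address(byte_offset):
        """Map a byte offset to (block number, offset within that block)."""
        return byte_offset // BLOCK_SIZE, byte_offset % BLOCK_SIZE

    print(block_address(10_000))  # -> (2, 1808): byte 10,000 lives in block 2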
Access to data on disk is several orders of magnitude slower than access to data stored in RAM. Even though disk transfers are inherently slow, several techniques can be used to speed them up.
The techniques are:
Buffering of blocks in memory (memory means RAM, i.e., main memory hereafter) – a major goal of a DBMS is to minimize the number of disk-block transfers between memory and disk. Buffering allocates space in main memory to hold copies of blocks, so that blocks which are needed frequently can be retained for future use. We want to do this for as many blocks as possible, and several replacement policies exist for choosing which blocks are worth keeping. This work is done by the buffer manager; a small sketch follows.
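The following is a minimal, illustrative sketch of a buffer manager that keeps recently used blocks in memory with an LRU (least recently used) replacement policy. The names BufferManager and read_block_from_disk are assumptions made for the example, not the API of any particular DBMS.

    from collections import OrderedDict

    class BufferManager:
        """Minimal LRU buffer pool: keeps up to `capacity` disk blocks in memory."""

        def __init__(self, capacity, read_block_from_disk):
            self.capacity = capacity
            self.read_block_from_disk = read_block_from_disk  # function: block_no -> data
            self.pool = OrderedDict()  # block_no -> block data, ordered by recency

        def get_block(self, block_no):
            if block_no in self.pool:
                # Buffer hit: no disk transfer, just mark the block as recently used.
                self.pool.move_to_end(block_no)
                return self.pool[block_no]
            # Buffer miss: evict the least recently used block if the pool is full,
            # then fetch the requested block from disk and keep it in memory.
            if len(self.pool) >= self.capacity:
                self.pool.popitem(last=False)
            data = self.read_block_from_disk(block_no)
            self.pool[block_no] = data
            return data

Every buffer hit saves one disk-block transfer, which is exactly the quantity the buffer manager tries to minimize.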
Scheduling – this is about transferring data in the order in which they are laid out on disk, so as to reduce disk-arm movement. For example, suppose several block requests have been issued and the disk read-write head is currently over the innermost track of a platter. The idea is to move the arm toward the outermost track and service the required blocks on the way, instead of moving back and forth. Assume tracks are numbered 1 to 100, where track 1 is the innermost and track 100 the outermost, and requests arrive for data on tracks 7, 25, 50 and 75 in the order 7, 50, 75, 25. If the read-write head is at the innermost track, it can sweep toward the outermost track and serve track 7 followed by 25, 50 and 75. [This differs from the arrival order – the requests for tracks 50 and 75 were issued before the one for 25 – but all of them were pending while the arm was at the innermost track.] This is the elevator (SCAN) idea, and it reduces total disk-arm movement; a sketch is given below.
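Below is a small sketch of the elevator (SCAN) ordering described above, applied to the example requests. The function name elevator_order and its parameters are illustrative assumptions, not a standard library routine.

    def elevator_order(requests, head_track, direction="out"):
        """Serve pending track requests in one sweep of the disk arm (SCAN/elevator).

        requests: track numbers of pending block requests (any arrival order)
        head_track: track the read-write head is currently over
        direction: "out" = move toward the outermost track first
        """
        ahead = sorted(t for t in requests if t >= head_track)
        behind = sorted((t for t in requests if t < head_track), reverse=True)
        return ahead + behind if direction == "out" else behind + ahead

    # Requests arrived in the order 7, 50, 75, 25 while the head sat on track 1:
    print(elevator_order([7, 50, 75, 25], head_track=1))   # -> [7, 25, 50, 75]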
File Organization – data files can be stored in a contiguous manner so that they are easier to access. For example, a hard disk contains several platters arranged one over the other. If a file can be stored on the same track of adjacent platters (i.e., within one cylinder), the disk arm may need to be positioned only once to read the whole file. Many systems follow this type of file organization. For example, the Disk Defragmenter utility provided in Windows rearranges the scattered blocks of a file so that they form a contiguous sequence. The small sketch below illustrates why a contiguous layout saves arm movement.
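As a rough illustration (assuming, for simplicity, that every change of track costs one arm movement), the sketch below counts the arm movements needed to read a file laid out contiguously versus one whose blocks are scattered. The function seeks_needed is a hypothetical helper written only for this example.

    def seeks_needed(block_tracks):
        """Count how many times the disk arm must move to a new track
        to read a file whose blocks lie on the given sequence of tracks."""
        seeks = 0
        current = None
        for track in block_tracks:
            if track != current:
                seeks += 1
                current = track
        return seeks

    # A contiguously stored file keeps its blocks on the same (or adjacent) track,
    # while a fragmented file scatters them across the disk.
    print(seeks_needed([12, 12, 12, 12]))   # contiguous file: 1 arm movement
    print(seeks_needed([12, 47, 3, 80]))    # fragmented file: 4 arm movements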
Nonvolatile Write Buffers – main memory is volatile, i.e., it loses its contents on a power failure. To protect pending writes, we can use Non-Volatile Random Access Memory (NV-RAM), which is backed by battery power. Buffered writes that are still pending at the time of a system crash or sudden power failure can then be recovered and completed, as sketched below.
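The following is only a control-flow sketch of the idea, assuming a hypothetical write_block_to_disk function; real NV-RAM write buffers are implemented in the disk controller hardware and firmware, not in application code.

    class NVRamWriteBuffer:
        """Sketch of a non-volatile write buffer: a write is acknowledged as soon
        as it reaches battery-backed NV-RAM and is flushed to disk later, so a
        crash or power failure cannot lose an acknowledged write."""

        def __init__(self, write_block_to_disk):
            self.write_block_to_disk = write_block_to_disk  # function: (block_no, data) -> None
            self.pending = {}  # block_no -> data held in NV-RAM, survives power loss

        def write(self, block_no, data):
            # Record the write in NV-RAM and acknowledge it immediately.
            self.pending[block_no] = data

        def flush(self):
            # Runs in the background, or during recovery after a restart.
            for block_no, data in list(self.pending.items()):
                self.write_block_to_disk(block_no, data)
                del self.pending[block_no]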



