So, after line no. 21021, declare these counters. Add hit_counter as I described in the previous doc, and also add a variable time that is incremented each time any function in cache.c is called. Specifically, increment time at lines 21035 and 21070, right after the statements ++hit_counter and Time_requiredtofree_block++. Remember to add a printf statement wherever you increment one of these counters. Initialize hit_counter and time to zero (this is important).
Well, I had written two C files, named fil1.c and fil2.c, which added and subtracted two numbers. When I ran them on Bochs, I got the output of the printf statements. The performance test was a two-step process. In the first step, make no changes to BLOCK_SIZE or the hash table, but include the variables hit_counter and time; run the command and note the results. Then increase BLOCK_SIZE and the hash table size and run the same command again. You will definitely find a change in the results.
The answer to this lies in question 3. The first set of values for hit_counter and Time_requiredtofree_block was obtained when BLOCK_SIZE and the hash table size were 1024. The second set was obtained after BLOCK_SIZE and the hash table size were increased.
5) Fifth, it asks: how and where should the variables be initialized?
Initialize hit_counter and time to zero at line no. 21021, where you declare them.
FEW MORE FINDINGS
I have found a few more points which you can use for your presentation. This is regarding the LRU chain, which can be discarded entirely. Instead, we can use a circular doubly linked list.
The figure is a bit crude, but I think it gets the idea across. First, the pivot is connected to the hash table entry. The FRONT end holds the blocks that are least needed, and the REAR holds the blocks that are expected in the near future (same as in LRU).

This linked list has an added advantage when reading the next block from the chain rather than from the disk through I/O: the code can be written so that the REAR is accessed as fast as the FRONT. This enhances cache performance beyond the LRU scheme, because under LRU, reaching the REAR requires traversing to the end of the chain, whereas here we can use simple logic (for example a flag: if flag = 1, go clockwise from the pivot along the FRONT; if flag = 0, go anticlockwise from the pivot along the REAR). Believe me, this will speed up the cache. It is especially useful when, under our modified code, contiguous blocks are read from the disk: the OS anticipates that the next needed block is the contiguous one and places it at the REAR, and accessing it under this scheme is much faster than under LRU.