| Name | Last modified | Size | Description |
|---|---|---|---|
| COP5641 Ryan & Jin.tgz | 19-Jun-2006 17:01 | 529K | |
| Makefile | 14-Jun-2006 15:52 | 447 | |
| blockdev.c | 15-Jun-2006 02:26 | 12K | |
| blockdev.h | 14-Jun-2006 15:52 | 1.4K | |
| cache.c | 15-Jun-2006 03:22 | 19K | |
| cache.h | 15-Jun-2006 02:14 | 2.6K | |
| cache.htm | 18-Jun-2006 14:44 | 16K | |
| cache_files/ | 18-Jun-2006 14:47 | - | |
| csetup | 14-Jun-2006 15:52 | 84 | |
| differ | 14-Jun-2006 15:52 | 241 | |
| final_perf_output | 18-Jun-2006 14:35 | 718 | |
| get | 14-Jun-2006 15:52 | 72 | |
| hashtable.c | 14-Jun-2006 15:52 | 5.3K | |
| hashtable.h | 14-Jun-2006 15:52 | 1.4K | |
| | 19-Jun-2006 17:00 | 2.2M | |
| multiprocess | 14-Jun-2006 15:52 | 900 | |
| net.c | 14-Jun-2006 15:52 | 8.6K | |
| net.h | 14-Jun-2006 15:52 | 1.9K | |
| netclient.c | 14-Jun-2006 15:52 | 20K | |
| netclient.h | 14-Jun-2006 15:52 | 2.5K | |
| off | 14-Jun-2006 15:52 | 58 | |
| on | 14-Jun-2006 15:52 | 52 | |
| onc | 14-Jun-2006 22:08 | 70 | |
| ons | 14-Jun-2006 15:52 | 102 | |
| p4_final_report.doc | 18-Jun-2006 14:36 | 192K | |
| p4_final_report.odt | 18-Jun-2006 14:39 | 113K | |
| povanemscript | 14-Jun-2006 15:52 | 774 | |
| povlocalscript | 14-Jun-2006 15:52 | 194 | |
| povrescript | 14-Jun-2006 15:52 | 111 | |
| put | 14-Jun-2006 15:52 | 75 | |
| runsequences | 14-Jun-2006 15:52 | 254 | |
| ryajin_cop5641_su06_final.tar.gz | 19-Jun-2006 17:01 | 624K | |
| serial | 14-Jun-2006 15:52 | 40 | |
| server.c | 14-Jun-2006 15:52 | 14K | |
| server.h | 14-Jun-2006 15:52 | 2.5K | |
| sort | 14-Jun-2006 15:52 | 13K | |
| sort.cpp | 14-Jun-2006 15:52 | 4.7K | |
| sortscript | 14-Jun-2006 15:52 | 1.5K | |
| sortscript2 | 14-Jun-2006 15:52 | 736 | |
| trace1.jpg | 14-Jun-2006 16:00 | 81K | |
| trace2.jpg | 14-Jun-2006 16:01 | 69K | |
| trace3.jpg | 14-Jun-2006 16:01 | 73K | |
| trace4.jpg | 14-Jun-2006 16:02 | 68K | |
| trace5.jpg | 14-Jun-2006 16:04 | 70K | |
| trace6.jpg | 14-Jun-2006 16:04 | 80K | |
| trace7.jpg | 14-Jun-2006 16:05 | 93K | |
| vars.c | 14-Jun-2006 15:52 | 2.6K | |
| vars.h | 14-Jun-2006 15:52 | 3.9K | |
| vpi.h | 14-Jun-2006 15:52 | 1.9K | |
Linux Kernel & Device Driver Programming
Cache Optimization for the Adaptive Network Memory Engine (Anemone)
COP5641 - 1
Ryan Woodrum & Jin Qian
Project Objectives:
- Design a cache management policy for the Anemone pseudo block device
- Improve the cache hit rate (currently at most 5%)
Project Progress:
June 2, 2006

We discussed the background of the Anemone project with Dr. Kartik after class, along with possibilities for improving its cache management policy.
June 4, 2006

We had a further discussion about the details of how to improve cache performance. Based on this discussion, we proposed two policies:

1. Logically separate the cache into a smaller write buffer and a read cache. Both sizes are configurable so that we can evaluate the optimum division.
2. Replace the current LRU cache replacement policy with an LRW-ended read-cache replacement policy, which we think is more suitable for network-swapped storage.

Beyond these two base improvements, we planned to address the following issues if time allows:

1. Solve the page aging problem caused by the new cache replacement policy.
2. Dynamically adjust the write buffer/read cache split based on workload.
3. Combine the write buffer and read cache into a unified cache.

We planned to start reading the code and to mark the places we should modify as well as the places we should write from scratch. The proposed split-cache layout is sketched below.
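A minimal sketch of policy 1, assuming hypothetical names (split_cache, write_slots) and a 4 KB page size; the real Anemone code differs. The split point is exposed as a module parameter so the optimum division can be found experimentally:

```c
/* Hypothetical sketch of the split write-buffer/read-cache layout. */
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>

#define CACHE_SLOTS 1024                 /* total page-sized slots      */
#define SLOT_BYTES  4096                 /* assume 4 KB pages           */

static int write_slots = 256;            /* tunable at module load time */
module_param(write_slots, int, 0444);
MODULE_PARM_DESC(write_slots, "slots used as write buffer; rest is read cache");

struct split_cache {
	char *pool;                      /* one vmalloc'd slab of slots  */
	int   nr_write;                  /* slots 0..nr_write-1: writes  */
	int   nr_read;                   /* remaining slots: read cache  */
};

static int split_cache_init(struct split_cache *c)
{
	if (write_slots <= 0 || write_slots >= CACHE_SLOTS)
		return -EINVAL;
	c->pool = vmalloc(CACHE_SLOTS * SLOT_BYTES);
	if (!c->pool)
		return -ENOMEM;
	c->nr_write = write_slots;
	c->nr_read  = CACHE_SLOTS - write_slots;
	return 0;
}
```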
June 5, 2006

Reading code.
June 6, 2006

Reading code.
June 8, 2006

After some effort reading the Anemone code and the kernel swap daemon code, we both had questions and new ideas about the project. At today's meeting we began to finalize the design and make programming decisions. We decided to replace the current LRU policy with an MRW policy, and we intend to use a workqueue to schedule the network requests for write-buffer transfers and read-cache prefetch, because in both situations the network transfer needs to be able to sleep. A sketch of the workqueue approach follows.
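A minimal sketch of the workqueue idea, written against the current kernel workqueue API (two-argument INIT_WORK()); the names anemone_wq, xfer_work, and net_send_page are ours, not Anemone's. The point is that the work function runs in process context, where sleeping during the network transfer is safe:

```c
/* Hypothetical sketch: defer a network transfer to process context. */
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/errno.h>

struct xfer_work {
	struct work_struct work;
	void *page;                        /* page to push to the remote server */
};

static struct workqueue_struct *anemone_wq;

static void xfer_fn(struct work_struct *w)
{
	struct xfer_work *xw = container_of(w, struct xfer_work, work);

	/* net_send_page() is a placeholder; it may sleep safely here. */
	/* net_send_page(xw->page); */
	kfree(xw);
}

/* Called from the block request path, which may be atomic. */
static int queue_transfer(void *page)
{
	struct xfer_work *xw = kmalloc(sizeof(*xw), GFP_ATOMIC);

	if (!xw)
		return -ENOMEM;
	xw->page = page;
	INIT_WORK(&xw->work, xfer_fn);
	queue_work(anemone_wq, &xw->work);
	return 0;
}

static int __init xfer_init(void)
{
	anemone_wq = create_singlethread_workqueue("anemone");
	return anemone_wq ? 0 : -ENOMEM;
}
```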
We decided to create a circular buffer for writes and a cache for reads. In particular, we discussed the implementation of the write buffer. There are two options, an array or a linked list. Considering the memory fragmentation caused by frequently allocating and freeing pages, and the complexity of pointer management, we prefer an array to store pages for writing. Although this incurs some copy operations from the write buffer to the read cache, copies will not be frequent, because under our cache management policy they happen only on initial cache population. Another benefit of a circular buffer is simple locking between reader and writer, as shown in the scullpipe assignment. A sketch follows.
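A sketch of the array-backed ring, again with hypothetical names and a fixed slot count; the head and tail indices give the scullpipe-style producer/consumer discipline:

```c
/* Hypothetical sketch of the array-backed circular write buffer. */
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/errno.h>

#define WB_SLOTS   256
#define SLOT_BYTES 4096

struct write_buf {
	char      (*slots)[SLOT_BYTES];  /* WB_SLOTS slots, vmalloc'd once */
	sector_t   sectors[WB_SLOTS];    /* which block each slot holds    */
	unsigned   head, tail;           /* producer / consumer indices    */
	spinlock_t lock;
};

/* Producer side: called when the block layer hands us a write. */
static int wb_put(struct write_buf *wb, sector_t sec, const void *data)
{
	int ret = 0;

	spin_lock(&wb->lock);
	if ((wb->head + 1) % WB_SLOTS == wb->tail) {
		ret = -ENOSPC;           /* full: flush to network first   */
	} else {
		memcpy(wb->slots[wb->head], data, SLOT_BYTES);
		wb->sectors[wb->head] = sec;
		wb->head = (wb->head + 1) % WB_SLOTS;
	}
	spin_unlock(&wb->lock);
	return ret;
}
```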
Another issue is the cache interface. In the current implementation, the "cache" code actually behaves like a page swap manager: it first checks the LRU cache, and on a hit it returns the page to the upper level; otherwise it issues a network request to fetch the page from the remote server and sends it to the upper level. Ideally, the cache itself should not care about the network transfer; it should simply return a page if there is one and NULL if not. We plan to change the cache_add and cache_retrieve functions into page_put and page_get, which look up the cache first, then the network, then the local disk. We can then focus on the cache part while the other team (Jian & Kyle) focuses on the network and local disk. This requires coordination with them, so we may talk with them tomorrow. The intended call chain is sketched below.
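A sketch of the fall-through lookup in page_get(); cache_lookup(), net_fetch(), disk_fetch(), and cache_insert() are placeholders for the real functions owned by each team, assumed to return 0 on success:

```c
/* Hypothetical sketch of the proposed page_get() call chain. */
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

#define SLOT_BYTES 4096

void *cache_lookup(sector_t sec);           /* returns cached page or NULL */
int   net_fetch(sector_t sec, void *buf);   /* fetch from remote server    */
int   disk_fetch(sector_t sec, void *buf);  /* local disk fallback         */
void  cache_insert(sector_t sec, const void *buf);

static int page_get(sector_t sec, void *buf)
{
	void *hit = cache_lookup(sec);

	if (hit) {                           /* cache hit */
		memcpy(buf, hit, SLOT_BYTES);
		return 0;
	}
	if (net_fetch(sec, buf) && disk_fetch(sec, buf))
		return -EIO;                 /* neither source had the page  */
	cache_insert(sec, buf);              /* populate cache for next time */
	return 0;
}
```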
We also talked briefly about using time ticking, and about not swapping cached pages out to the remote server, to avoid the cache aging problem. We also thought about how to disable the swap daemon's read-ahead, which is meaningless for Anemone: it is conceptually a random-access device, so sequential reads and writes gain no benefit.
June 9, 2006 - June 15, 2006

We ran the first version with delayed write, which schedules a write request as soon as a page is written to the buffer. After much trial and error we finally built a stable version, but the result was disappointing: a huge amount of write data occupied the cache most of the time, so the read hit rate suffered. We changed our algorithm to prefetch when the swapper requests a page to be read in, and in addition we limited the amount of cache space used as the write buffer. Both changes are sketched below.
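A sketch of the two changes, with illustrative constants (the cap fraction, prefetch depth, and prefetch pattern are our assumptions, not the tuned values); write_slot_get() refuses new cached writes past the cap, and read_miss() pulls in a few extra pages per miss:

```c
/* Hypothetical sketch: cap write-buffer occupancy, prefetch on read miss. */
#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/types.h>

#define CACHE_SLOTS      1024
#define WRITE_CAP        (CACHE_SLOTS / 4) /* writes may use at most 25%  */
#define PREFETCH_PAGES   4
#define SECTORS_PER_PAGE 8                 /* 4 KB page / 512 B sectors   */

static atomic_t write_used = ATOMIC_INIT(0);

void prefetch_request(sector_t sec);       /* placeholder: async fetch    */

/* Write path: refuse to cache more writes once the cap is reached. */
static int write_slot_get(void)
{
	if (atomic_inc_return(&write_used) > WRITE_CAP) {
		atomic_dec(&write_used);
		return -ENOSPC;            /* caller flushes, then retries */
	}
	return 0;
}

/* Read-miss path: also request a few additional pages. */
static void read_miss(sector_t sec)
{
	int i;

	for (i = 1; i <= PREFETCH_PAGES; i++)
		prefetch_request(sec + i * SECTORS_PER_PAGE);
}
```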