Kernel Drivers Final Project: Anemone Driver Reliability
We make the serial cable work!!!
In kernel debugging you will sooner or later run into a kernel panic. When the kernel panics, the most important thing is to capture the panic core dump information, but the screen tends to freeze before you can read it, and you cannot rely on /var/log/messages to record those messages. So we have to find another way to capture the panic output. The solution is a serial console. To make the serial cable work, follow these steps:
1. Connect the machines with a serial cable. Call the machine that runs our program (and is prone to panic) A, and the machine where we want to see the panic core dump B.
2. Edit /etc/inittab on A and add a line like:
Co:2345:respawn:/sbin/agetty ttyS0 9600 vt100
3. Download the uucp source code and untar it on B, then run: ./configure; make; make install
4. Log in to A from B (preferably from an X Window console) with this command line:
cu -l /dev/ttyS0 -s 9600 --parity no
Setting the speed is tricky here: if it does not match the line speed, you will get garbage on B's screen. If you do not know the right speed, you have to try values in the range 4800-144000; mine worked at the 9600 used above.
Then you can start running your program on A.
Tagged Hashtable design:
The mapping hashtable takes a page offset and tells you which server the page is stored on:
Server1 Address< ------------- >Page offset
Server2 Address< ------------- >Page offset
The map hashtable currently holds 170,000 items, and each entry takes 16 bytes (the hashtable uses a memory pool to allocate entry space; on a 64-bit machine an entry takes 24 bytes), so the total size is about 2.72 MB.
If we used a second hashtable to store the reverse mapping from server address to page offset, it would take another 2.72 MB, which is unbearable for a client with little memory.
In the tagged hashtable we instead map the page offset to a server tag rather than a server address, so looking up an offset gives you the server tag.
#define __TAG_INIT 0x0001
Server1->tag = __TAG_INIT << 0
Server2->tag = __TAG_INIT << 1
Servern->tag = __TAG_INIT << (n-1)
For example, if you choose server1 and server3 for offset a, OR their tags together (0x0001 | 0x0004 = 0x0005) and store the mapping from offset a to 0x0005 in the hashtable.
That way, when searching for offset a in the hashtable, we get the tag value (0x0005) and walk the server list: every server with
tag & serverx->tag != 0
holds the page.
Solving the panic:
When the two pageout requests are sent out, they share the same packet struct, which frames the pageout sk_buff. When the first server's ack comes back, the current logic deletes the packet while the second server still needs it.
We added a user count to the packet struct, and the packet is not deleted until the user count drops to 0.
A similar problem was found with the sk_buff. Since the sk_buff is shared by the two servers on the client side, and the NIC driver takes care of freeing the sk_buff once the network driver layer has sent it out, we need not do anything except increase the user count of the sk_buff struct, so that when the first copy is sent it won't be freed.
I've been trying to figure out how to go about doing the local replication for the last few days and have found a big problem: there's no easy way to write to a local disk!
The kernel API doesn't provide any functions for opening a file, and after reading around, it's not good practice to even try, as it may cause panics and other problems. The only way we may be able to get around it is by directly interfacing with an existing block device we can replicate to (local swap partition, file-backed block device, etc.). We would basically have to stack our block device on top of the real one, do some magic with the requests our block device gets, and pass them on to the real one. This is similar to what the RAID and LVM drivers do, and I think it may be fairly difficult to implement.
The book touches on how you could modify the bio struct in our custom request handler to point to a different device and then simply return, so that the block layer will retry the bio on the device we changed it to. But how we determine the device (we need a dev_t, not just a logical /dev entry) and what memory and concurrency issues might arise are still a mystery to me. And this is just for page-outs; I don't know how we could generate our own bio requests to page in from the device in the case of a network failure.
Debugging the remote memory replication code.
One problem was found; here is the description:
When the server response for the page-out message comes back and the status shows the page was successfully paged out, the client deletes the page-out buffer while the second server still needs it.
Solution:
1. Do not delete the page-out buffer until the response from the second least utilized server comes back.
2. When that second response comes back, do not call back to notify the swap daemon, since that was already done when the first response came back.
Finish coding the remote memory replication code.
Here is the link for the replication.
Group discussion of the design for local disk replication.
We decide not to touch the swap code.
Writing code for remote memory replication.
All the changes reside in netclient.c which is the network layer of the anemone system.
Client                        Server

Block Device                  Block Device
     ^                             ^
     |                             |
     v                             |
   Cache                           |
     ^                             |
     |                             |
     v                             v
Network layer <-------------> Network layer
Finish design for remote memory replication.
1. Add a method that selects the second least utilized server, and add the second mapping table that stores which servers each offset is associated with.
2. Every time we page out, in the transmit() function, select the second least utilized server, page out to it as well, and add the mapping.
3. Suppose the least utilized server goes down. When the client tries to page in the old page, it cannot find that server in the server list, since the server was deleted from the list when it failed.