Our project involves adapting and upgrading selected kernel security functionality from an older kernel module known as St. Michael. Our kernel module, which we call the Guardian Kernel Module (GKM), will be built for the 2.6 series of Linux kernels over a two-week period and will include many of St. Michael's security features through a different implementation that accommodates the 2.6 kernels.
Project Design Presentation
Final Project Documentation
Final Project Presentation
June 14 - June 15, 2006:
Documentation, documentation, documentation!!! Louis and I have finally finished all of our documentation, and we have made additions to this website. Namely, the source tarball is available, as well as a few other goodies. Enjoy!
June 13 - June 14, 2006:
In these last few days Louis and I debugged our module-checking and spinlocks. It seems that the two problems went hand in hand, as the kernel timer would interrupt Louis' insert and delete module wrappers in the middle of their work. We attempted to solve this problem by using a special type of spinlock of the "bh" variety. This spinlock locks out all software interrupts (like the type our kernel timer uses) while we are executing our critical sections.
Unfortunately, though, using that type of spinlock was not enough. Whenever init_module or delete_module is called, we need to execute three things in a specific order without being interrupted. One of those things is the call to the actual init_module or delete_module system call, and that call can possibly sleep. This prevents us from protecting the whole section with a spinlock. We decided to get around this issue by disabling the timer before this section, calling the actual init_module or delete_module system call, spinlocking the other two sections, and then restarting the timer. This solution seems to work, and we now have a stable (as far as we know) version of GKM ready for demonstration.
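The ordering above is the important part, so here is a plain userspace sketch of it. The timer is modeled as a flag and the lock-protected sections as a single stub; all of the names (guarded_init_module, locked_bookkeeping, and so on) are illustrative, not GKM's actual identifiers, and in the real module the "stop timer" and "locked sections" steps are del_timer/spin_lock_bh calls:

```c
#include <assert.h>
#include <string.h>

/* Userspace sketch of GKM's interrupt-safe load sequence.
 * The timer is a flag; the order of operations is recorded in a log. */

static int timer_running = 1;
static char oplog[256];

static void log_op(const char *op) { strcat(oplog, op); strcat(oplog, ";"); }

static void stop_timer(void)  { timer_running = 0; log_op("stop_timer"); }
static void start_timer(void) { timer_running = 1; log_op("start_timer"); }

/* Stand-in for the real init_module syscall, which may sleep and
 * therefore cannot run under a spinlock. */
static int real_init_module(void) { log_op("init_module"); return 0; }

/* Stand-in for the two short bookkeeping sections that ARE held
 * under the "bh" spinlock in the real module. */
static void locked_bookkeeping(void) { log_op("locked_sections"); }

static int guarded_init_module(void)
{
    int ret;
    stop_timer();              /* 1. no timer can fire mid-update  */
    ret = real_init_module();  /* 2. the sleepy call, unlocked     */
    locked_bookkeeping();      /* 3. short sections under the lock */
    start_timer();             /* 4. resume periodic checking      */
    return ret;
}
```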
June 12, 2006:
Today I made a major project breakthrough and discovered how to cloak modules in the 2.6.x kernels. From the very beginning, Louis and I decided that it would be nice if we could cloak (hide) GKM, but we were unable to find any references on how to do this. I figured out a very simple way to do this today based on the way modules are stored in the kernel.
2.6.x modules are structs which contain specific information about each module. Each module struct contains a struct list_head, i.e. an element in a doubly-linked list. The kernel contains many specialized functions and macros to manipulate list heads, as it seems the developers encourage all kernel developers wanting a doubly-linked list to use list heads. These function prototypes and macros are contained in list.h.
It seems that whenever the kernel wants a list of currently-loaded modules, it traverses this doubly-linked list of modules. I figured out that if we remove our own module from the module linked list and reinitialize its list head pointers to point back to itself, the kernel no longer lists this module in /proc/modules (or lsmod, which reads /proc/modules). Yet, the kernel module is still loaded and running!! This also means that the kernel cannot find the module to remove it, as rmmod will only remove modules in /proc/modules.
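The trick can be shown in plain userspace C. The list functions below are a hand-rolled analogue of the kernel's list.h circular doubly-linked list (mirroring its names, but not the kernel's actual code), and the struct module here is a toy stand-in:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal userspace analogue of the kernel's struct list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_add(struct list_head *new, struct list_head *head)
{
    new->next = head->next;
    new->prev = head;
    head->next->prev = new;
    head->next = new;
}

/* Unlink an entry and point it back at itself, as GKM does to cloak:
 * the entry vanishes from traversal, but the object still exists. */
static void list_del_init(struct list_head *entry)
{
    entry->prev->next = entry->next;
    entry->next->prev = entry->prev;
    entry->next = entry->prev = entry;
}

/* Toy stand-in for the kernel's module struct. */
struct module { const char *name; struct list_head list; };

/* Count how many modules a /proc/modules-style walk would see. */
static int count_modules(struct list_head *head)
{
    int n = 0;
    struct list_head *p;
    for (p = head->next; p != head; p = p->next)
        n++;
    return n;
}
```

After list_del_init, a traversal from the list head simply never reaches the cloaked module, yet the module struct (and its code) are untouched.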
I also made a couple more rootkit tests, in the project directory GUARDIAN_ATTACKS, to test the functionality of the module-checking component of GKM that Louis is currently working on. One test module will load and cloak itself. GKM should be able to notice this as a problem, as init_module will have been called with no corresponding new module in the module linked list. I also made a test that writes over a module's exit function in memory. GKM should notice this through its module md5 summing.
Hopefully tomorrow we will be able to finish our module functionality, which will mean that the majority of this project will be finished. Wednesday should be spent on preparing final documentation and our project presentation.
June 11, 2006:
Sarah thought of a neat little trick to access the kernel's internal module list today. Since we know the location of our module through the THIS_MODULE macro, we can access the list through our own module: we get a pointer to its list head, and from that pointer we can reach the rest of the list. I have already set up the list initialization code, and it adds any modules loaded before Guardian to our list and creates an MD5 sum for verification. I also have the code in place to add any modules that are loaded after Guardian. The way it works is that whenever a module is loaded, our init_module wrapper is called first, passes the module info on to the kernel's init_module function, and on success calls our module_add function. This then checks the list for any new additions and adds them to our records. It's not ideal, but it works. Tomorrow I plan to clean up a few loose ends and start on the checking function to verify the md5 sums of the modules. What's nifty about this setup is that we will automatically check the Guardian module for any changes too, since it is also in our internal module list.

One other small item was knocked off the list today: I managed to keep Guardian from unloading by placing a quick check in our delete_module wrapper. rmmod makes a call to delete_module to unload a module, and we created a wrapper for this function as part of our module tracking. Now, when rmmod is called to unload Guardian, we catch it and return -EBUSY, so you can't unload the module. (Of course, if you compiled forced unloading into your kernel, this won't work.)
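The -EBUSY guard is a one-liner in the wrapper. Here is a userspace sketch of it; real_delete_module, guarded_delete_module, and GKM_NAME are illustrative names, not the project's actual identifiers:

```c
#include <assert.h>
#include <string.h>
#include <errno.h>

/* Userspace sketch of the delete_module wrapper's self-protection. */

#define GKM_NAME "guardian"

static int unloaded;  /* records whether the "real" unload ran */

/* Stand-in for the kernel's original delete_module. */
static int real_delete_module(const char *name)
{
    (void)name;
    unloaded = 1;
    return 0;
}

static int guarded_delete_module(const char *name)
{
    /* Refuse to unload Guardian itself; everything else passes through. */
    if (strcmp(name, GKM_NAME) == 0)
        return -EBUSY;
    return real_delete_module(name);
}
```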
Today Louis and I were able to figure out how to access the kernel's protected list of modules. We figured out that if we create a module struct within our code, it is automatically added to the module linked list. This means that we are able to traverse the list, starting with our declared module. Louis and I have now begun work implementing module cloaking and md5 summing protection.
I also spent a few hours attempting to cloak Guardian itself. I was successful in changing the name and size of the module to 0 in /proc/modules, but I was unable to hide it. If we have extra time towards the end of the project, I'll try to revisit this problem.
June 10, 2006:
Today I got deeper into the kernel module loading code in an attempt to track what modules were being loaded and unloaded. There have been some significant changes in the way the 2.6 kernel loads modules compared to 2.4. According to Rusty Russell, the kernel developer responsible for the 2.6 module management code, there was a decision to move most of the module loading functionality into kernel space. This was done for security and ease of coding. In the past, two separate function calls were used to load and initialize a module: create_module moved the module from user space to kernel space, and init_module was used to initialize the module. create_module returned a pointer to the newly created module, which is how St. Michael was getting access to the new module at load time. The original St. Michael LKM used a wrapper to catch this return and add the module to St. Michael's internal list. The 2.6 kernel does away with this system call completely. 2.6 loads a module by calling init_module, which in turn calls load_module, which pulls the module from user space using copy_from_user and initializes it. At this time it does not appear that there is any way to get the address of the module using this method.

Update: After some more research I have decided that we can't access the module at creation using init_module without hacking the kernel code itself, which I am not willing to do at this time. I have found a function called query_module that you can call to get module information.

Update 2: After more research it appears that query_module is no longer in 2.6; actually, it never was. It appears that in the kernel module code rewrite it was decided to drop this functionality, and there is no replacement for it. You can access some of the information from /proc/modules, which is what lsmod does now, but that would require doing stuff in user space.
I am kind of stuck on how to proceed, because it doesn't look like we will be able to access the modules as they are loaded without some serious gymnastics in user space, which would require a complete rewrite of our code.
Today I was able to test Guardian's detection of modified system calls by creating a test kernel module that wrote over 8 bytes of a system call's image in memory. We also discussed what Guardian should do if it does detect modified system calls in memory, and Louis came up with a nice solution. We didn't want to be too destructive, yet the system may become very unstable (and even kernel panic) if we do nothing. Louis suggested repointing the compromised system call's pointer to a null system call until the system administrator can take action. This way, depending on which call was compromised, the system has a chance of staying up while totally locking out the attacker's use of that system call. I was able to implement this method directly into Guardian's code.
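Louis' "null system call" response can be sketched in userspace by modeling the syscall table as an array of function pointers. The names below (sys_null, neutralize_tampered, and the toy table) are illustrative only:

```c
#include <assert.h>
#include <errno.h>

/* Userspace sketch: repoint tampered syscall-table entries to a
 * harmless null handler instead of leaving them to the attacker. */

typedef long (*syscall_fn)(long arg);

static long sys_chdir_real(long arg) { (void)arg; return 0; }
static long evil_sys_chdir(long arg) { (void)arg; return 0; }

/* The "null system call": refuses service without oopsing the box. */
static long sys_null(long arg) { (void)arg; return -ENOSYS; }

#define NR_CALLS 4
static syscall_fn table[NR_CALLS];   /* stand-in for sys_call_table */
static syscall_fn saved[NR_CALLS];   /* Guardian's trusted copy     */

static void snapshot(void)
{
    for (int i = 0; i < NR_CALLS; i++)
        saved[i] = table[i];
}

/* Repoint any tampered entry to sys_null; return how many were hit. */
static int neutralize_tampered(void)
{
    int hits = 0;
    for (int i = 0; i < NR_CALLS; i++)
        if (table[i] != saved[i]) { table[i] = sys_null; hits++; }
    return hits;
}
```

The system stays up, but any process invoking the compromised slot just gets an error back until the administrator intervenes.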
June 9, 2006:
Today Louis worked a little bit more on debugging his delete_module system call wrapper. There still seems to be some sort of bug when calling the original delete_module system call from within the wrapper, and hopefully we will figure out what it is tomorrow. It would be nice if Guardian could somehow discover the system call interface ahead of time, because it seems to change frequently.
Today I made a breakthrough with my md5 summing code. I had found an md5.h in my header files, but for some reason the compiler would complain that the three md5 functions were still undefined. After looking online for free md5 solutions, I stumbled on the actual md5.c source code in my own Linux source tree. I was able to copy md5.c and md5.h directly into Guardian's source tree, and I now have access to md5's functions. I like this solution better, as it does not depend on the user having md5 compiled into their kernel or available as a module.
Md5 summing now protects each system call. My next step is to write up an exploit module to attempt to actually write over a system call in memory to see if Guardian properly catches it. This situation also presents a good design question: what should Guardian do if it finds a modified system call? I really don't think there is a good answer. St. Michael would actually attempt to reload the running kernel in memory and reboot the machine, but that seems especially destructive for a production system. I'm leaning towards Guardian doing nothing but reporting the problem, and leaving the solution up to the system administrators in charge of the machine.
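The per-call summing idea looks roughly like this in userspace. A real build uses the md5.c/md5.h copied into the tree; here a tiny FNV-1a hash stands in for md5, and a byte array stands in for a system call's code image. All names are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* FNV-1a: a stand-in hash for md5 in this sketch, NOT what GKM uses. */
static uint64_t fnv1a(const unsigned char *p, size_t n)
{
    uint64_t h = 1469598103934665603ULL;
    while (n--) { h ^= *p++; h *= 1099511628211ULL; }
    return h;
}

#define IMG_LEN 32
static unsigned char chdir_image[IMG_LEN];  /* pretend code bytes */
static uint64_t chdir_sum;                  /* recorded at load time */

static void record_sum(void)
{
    chdir_sum = fnv1a(chdir_image, IMG_LEN);
}

/* Nonzero if the image no longer matches the recorded sum. */
static int image_tampered(void)
{
    return fnv1a(chdir_image, IMG_LEN) != chdir_sum;
}
```

A one-byte overwrite anywhere in the summed region changes the hash, which is exactly what the exploit module that writes over 8 bytes of a call should trigger.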
On another happy note, I also found a string.h library inside of my include/linux directory! I now have access to all my old friends, like strncpy and strncmp! (Yes, I'm such a nerd these days that strncpy and strncmp are my friends.) This should make coding a lot easier down the road.
June 8, 2006:
We made good progress today on understanding how the kernel actually loads modules in 2.6 as opposed to the process in previous versions. St. Michael creates wrappers for the create_module, delete_module and init_module syscalls (plus the module exit path) to track the addition and removal of modules from the system. The kernel's internal pointers in the syscall table are changed so that they point to the wrapper functions, which keep track of module changes and then call the kernel's original functions. We had planned to take this route, but after some more research we found that 2.6 has deprecated the create_module syscall. It's still listed in the table but points to a null function. After some discussion we decided that we really only needed to create wrappers for the init_module and delete_module syscalls. Currently, we have the init_module wrapper working correctly and can load modules, but the delete_module wrapper still bombs out. There were some changes to which variables are passed to the two calls that appear to be causing the problems. Once these problems are fixed we can set up the functions to add and delete modules from our internal linked list.
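The wrapper technique itself is simple: save the original pointer out of the table, install our function in its place, and do the bookkeeping around a call to the original. Here is a one-slot userspace sketch; the names (kernel_init_module, wrapped_init_module, modules_seen) are illustrative, not the kernel's or GKM's:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the syscall-wrapper technique. */

typedef int (*init_module_fn)(void *umod, unsigned long len);

static int modules_seen;  /* Guardian's bookkeeping: loads observed */

/* Stand-in for the kernel's real init_module. */
static int kernel_init_module(void *umod, unsigned long len)
{
    (void)umod; (void)len;
    return 0;  /* pretend the load succeeded */
}

/* One-slot stand-in for the sys_call_table entry for init_module. */
static init_module_fn table_slot = kernel_init_module;
static init_module_fn original;

static int wrapped_init_module(void *umod, unsigned long len)
{
    int ret = original(umod, len);  /* let the kernel do the real work */
    if (ret == 0)
        modules_seen++;             /* on success, note the new module */
    return ret;
}

static void install_wrapper(void)
{
    original = table_slot;          /* keep the kernel's pointer */
    table_slot = wrapped_init_module;
}
```

Anyone who invokes the table slot after installation goes through the wrapper transparently, which is why the tricky part is not the pattern but the changed 2.6 calling conventions.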
June 7, 2006:
Today is the birthday of this website! We decided it would be best if I hosted it on my server, as this is where the CVS repository lives. I also set up CVSWeb so that we could graphically view our CVS repository.
Today I was able to finish coding Guardian's system call mapping functionality. The 2.2 and 2.4 Linux kernels exported a kernel symbol called "sys_call_table", a pointer to an array of pointers to system call functions. This table is often abused in kernel rootkits because all an attacker has to do is change one of those pointers to point to a new system call function of his or her choosing. This symbol is not exported in the 2.6 kernel, yet it can still be found and modified with a little bit of hacking. Guardian, upon load, makes an internal copy of this system call table and checks the kernel's version against its own copy twice every second. If a pointer has been modified, Guardian will reload the kernel's system call table from its own copy.
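The checking pass that the timer would run twice a second can be sketched in userspace as snapshot, compare, restore. The table here is a toy array of function pointers, and all the names are illustrative:

```c
#include <assert.h>
#include <string.h>

/* Userspace sketch of Guardian's table check: keep a trusted copy,
 * compare it against the live table, and copy it back over changes. */

typedef long (*call_t)(void);

static long sys_real(void) { return 1; }
static long sys_evil(void) { return 666; }

#define NR 8
static call_t live_table[NR];     /* stand-in for sys_call_table */
static call_t trusted_copy[NR];   /* Guardian's internal copy    */

static void take_snapshot(void)
{
    memcpy(trusted_copy, live_table, sizeof(live_table));
}

/* Returns 1 if tampering was found (and repaired), 0 otherwise. */
static int check_and_restore(void)
{
    if (memcmp(live_table, trusted_copy, sizeof(live_table)) == 0)
        return 0;
    memcpy(live_table, trusted_copy, sizeof(live_table));
    return 1;
}
```

Because the repair is a straight copy of the trusted snapshot, a hijacked pointer is silently put back before the attacker gets much use out of it.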
To be sure this new piece of functionality was working, I also had to test it! This gave me the chance to write my first, very own kernel rootkit in which I replace the chdir system call (boy, wouldn't that be annoying!) with a silly printed message. This test module is now included in the GUARDIAN_ATTACKS directory.
After setting up the timer, I started working on remapping the module init, create, exit and remove functions. In the original St. Michael this is accomplished through the use of function pointers: an internal pointer is used to access the kernel's original functionality, and the syscall table is rerouted to a wrapper function that we create. The wrapper function allows us to track what modules are loaded and unloaded. I have managed to set up the internal data structure, a kernel linked list, to track what modules are loaded and unloaded. I have hit a roadblock with the wrappers due to problems passing the modules to load or unload on to the kernel's original functions.
June 5-6, 2006:
June 5th was the official first day of our project. We spent June 5th looking through the St. Michael LKM source code, getting the 2.4 kernel version of St. Michael to load and run on an older Linux box, building our design document, and preparing our first design PowerPoint presentation for our class. We also designed our stub functions for our module (the Guardian Kernel Module).
We had some difficulty getting the original St. Michael LKM to work. We could not get the 0.12 version to compile at all on a 2.4.20 stock kernel, yet we could get the 0.11 version to compile if and only if we hard-coded gcc-2.95 as our compiler in the Makefiles. We were also unable to establish contact with the lead developer of the St. Michael and St. Jude LKMs.
On June 6th we set up our project CVS server and started work on our first Guardian functionalities. Louis was able to get the kernel timer function to work correctly and started work on Guardian's module-checking functionality. Sarah started work on the system call mapping functionality but ran into a strange Makefile problem. Whenever she attempted to compile more than one *.c file into the module, it would compile and load but be broken. After hours of debugging and Googling, we finally found out that we could not name our module after one of our *.c files if multiple files were being compiled into the module.