Midterm Examination #2 COP 4610 - Operating Systems

This is a closed-book examination. You are not allowed to use any notes or other references during the exam. You will have 50 minutes to answer all the questions. Answer the true/false and short-answer questions on this paper. For other questions, write as much of your answer as will fit in the space provided below the question. If your answer does not fit entirely in the space below the question, use the blank paper provided to you to complete your answer. The exam will be graded on a scale of 100, but it is possible to achieve a score of up to 120. The first 10 numbered divisions of the exam are each worth 10 points. The last question requires you to explain your solution to Programming Assignment #3, and is worth 20 points. You will probably find some questions are easier for you than others. Therefore, it is important to budget your time. Do not miss the chance to answer some easier questions by puzzling too long over a question that is giving you difficulty. Remember to write your name on every page, and turn in all pages of the exam.

  1. Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left.
    1. (F)External fragmentation is a problem with paged memory systems.

      There can be no external fragmentation, because the memory is allocated in equal-sized blocks.

    2. (T)Internal fragmentation is a problem with paged memory systems.
    3. (F)Memory compaction is employed to counteract internal fragmentation.

      Compaction is only used to counteract external fragmentation.

    4. (T)Memory compaction is employed to counteract external fragmentation.
    5. (F)Dynamic relocation requires memory compaction.

      Paging systems do dynamic relocation without any need for compaction.

    6. (T)Memory compaction requires dynamic relocation.
    7. (F)First-Fit is a replacement algorithm.

      It is a placement algorithm.

    8. (F)Best-Fit generally causes less fragmentation than First-Fit.

      Best-Fit is well known to suffer badly from fragmentation, because it leaves behind very small fragments.

    9. (T)A function of a linker is to combine several object modules into a single load module.
    10. (F)A function of a linker is to replace absolute references in an object module by symbolic references to locations in other modules.

      The reverse is true.

  2. Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left.
    1. (F)Demand paging is a placement policy.

      It is a fetch policy.

    2. (F)Hashing is a fetch policy.

      Hashing is a table-lookup technique, and also an information hiding technique.

    3. (F)The last process activated is most likely to have its entire working set resident.

      On the contrary, when a process is first activated it will encounter a cluster of page faults until its full working set has become resident.

    4. (T)The process that caused the most recent page fault is a good choice to suspend (swap the entire process out) when suspension is called for.
    5. (F)The 50% criterion is a rule used to decide which page to replace.

      This rule is applied for load control to decide whether we can admit another process, and whether we need to suspend a process.

    6. (T)Fixed allocation with global replacement is not a workable combination for a resident set management scheme.

      Global replacement implies a process can take a page frame from another process. That would increase the allocation of the process, so it must be a variable allocation scheme.

    7. (T)The term TLB stands for Translation Lookaside Buffer.
    8. (T)The TLB is a cache for page table entries.
    9. (T)Without paging, a segmented memory system suffers external fragmentation.
    10. (T)Memory segmentation can be used for protection, allowing different privileges to a process for different memory segments.
  3. Explain the differences between the meanings of the terms logical address, relative address, and physical address.

    logical
    reference to a memory location that is independent of the current location of the process code and data in memory; translation must be made to the current physical address
    relative
    a special case of logical address, expressed as a location relative to some known "zero" point, such as the start of the memory occupied by the process
    physical
    the absolute address or actual location in main memory
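
    As a side illustration (not part of the original answer), here is a minimal C sketch of dynamic relocation with a base register and a bounds register; the register names and values are assumptions for the example:

      #include <stdint.h>
      #include <stddef.h>

      /* Hypothetical relocation registers, loaded by the OS when it dispatches
         the process.  The values are made up for the example.                  */
      static uintptr_t base  = 0x40000;   /* physical start of the process's memory */
      static size_t    bound = 0x10000;   /* size of the process's memory           */

      /* Translate a relative address to a physical address.  Returns 0 on a
         protection violation; a real MMU would raise a fault instead.         */
      uintptr_t translate_relative(size_t relative)
      {
          if (relative >= bound)
              return 0;                    /* outside the process's memory */
          return base + relative;          /* physical = base + relative   */
      }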

  4. Consider the Buddy System applied to a range of memory starting with address 0.
    1. If a block of size 16 has address 0000111100100000, what is the address of its buddy?

      0000111100110000

    2. In general, how is the address of the buddy of the block of size 2^k whose address is x computed?

      The simplest way to say this would be "Flip (or toggle, or invert, or complement) the (k+1)st bit of x".

      The answer is not "x + 2^k". That would get the right value in the (k+1)st bit, but if that bit was previously 1, there would be a carry into the (k+2)nd bit, which is incorrect. (A short C sketch of this computation appears at the end of this question.)

    3. Cite an example of an application of the Buddy System in an operating system.

      The example given in your text is the allocation of kernel memory in the Unix family (including Linux) of operating systems. The key here is that the Buddy System allows us to allocate contiguous ranges of addresses of different sizes. The kernel needs to allocate memory in blocks smaller than a full page, so it needs such a scheme. It cannot rely on the page allocation mechanism alone.
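
      As a side note (not part of the original answer), the rule from part 2 is a one-line computation in C: flipping the (k+1)st bit of x is an exclusive-or with 2^k.

        #include <stdint.h>
        #include <stdio.h>

        /* Address of the buddy of the block of size 2^k starting at address x:
           flip bit k (the (k+1)st bit, counting from 1) by XOR-ing with 2^k.   */
        uint32_t buddy_address(uint32_t x, unsigned k)
        {
            return x ^ ((uint32_t)1 << k);
        }

        int main(void)
        {
            /* The example from part 1: a block of size 16 = 2^4. */
            printf("%04x\n", (unsigned)buddy_address(0x0F20, 4));   /* prints 0f30 */
            return 0;
        }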

  5. What is thrashing? How might an OS detect that thrashing is occurring? How would it show up in the page fault frequency? How would it show up in the average time between page faults? What can an operating system do to deal with thrashing when it is detected? (Use separate paper.)

    Thrashing is a condition in which the system is getting very little useful work done because of an excessive rate of page faults. That is, most of the time the CPU is idle because all the processes are waiting for pages to be swapped into main memory. The system might detect this by looking at the paging disk activity and the CPU activity. If the paging disk is busy more than 50% of the time, that would be a sign that we have a thrashing problem. If the CPU is idle much of the time, but there are several processes waiting for page fault service, that would be another indication. We can also use the page fault frequency or the average time between page faults. With thrashing, the page fault frequency will be higher than normal, and the average time between page faults will be less than the time it takes to service a page fault. When thrashing is detected, the system must reduce the level of multiprogramming by swapping out one or more processes, and then allow the resident sets of the remaining resident processes to grow.
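
    To make the detection criteria concrete, here is a rough, purely illustrative C sketch of a load-control check; the structure and field names are assumptions, not taken from any particular system:

      /* Crude load-control test.  If page faults arrive faster than they can be
         serviced, or the paging disk is busy more than half the time (the 50%
         criterion), the system is thrashing and a process should be swapped out. */
      struct paging_stats {
          double mean_time_between_faults;   /* measured, e.g. in milliseconds */
          double fault_service_time;         /* time to bring in one page      */
          double paging_disk_utilization;    /* fraction of time disk is busy  */
      };

      int should_suspend_a_process(const struct paging_stats *s)
      {
          if (s->mean_time_between_faults < s->fault_service_time)
              return 1;                      /* faults outpace the paging disk */
          if (s->paging_disk_utilization > 0.5)
              return 1;                      /* the 50% criterion              */
          return 0;
      }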

  6. Define and contrast the terms resident set and working set.
    Resident set
    The set of pages (of a process) that are resident in main memory.
    Working set
    Intuitively, this is the set of pages that a process needs to have in main memory to run without page faults, for some time interval. The formal definition is the set of pages that a process references during the time window (t-D, t), for some time t and some window size D.

    The process will have no page faults so long as its working set is contained in its resident set.
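
    Purely as an illustration of the formal definition, the following C sketch computes the size of the working set W(t-D, t) from a recorded reference trace; the trace representation and the page-number limit are assumptions for the example:

      #include <string.h>

      #define MAX_PAGES 64                  /* assumed bound on virtual page numbers */

      struct reference { long time; int page; };

      /* Number of distinct pages referenced in the window (t-d, t):
         the size of the working set for this trace.               */
      int working_set_size(const struct reference *trace, int n, long t, long d)
      {
          char in_set[MAX_PAGES];
          int i, count = 0;

          memset(in_set, 0, sizeof in_set);
          for (i = 0; i < n; i++) {
              int p = trace[i].page;
              if (trace[i].time <= t - d || trace[i].time >= t)
                  continue;                 /* reference is outside the window */
              if (p >= 0 && p < MAX_PAGES && !in_set[p]) {
                  in_set[p] = 1;
                  count++;
              }
          }
          return count;
      }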

  7. Consider a paged virtual memory system. Suppose the page table for the process currently executing on the processor looks like the following. All numbers are decimal, everything is numbered starting from zero, and all addresses are memory byte addresses. The page size is 1024 bytes.

    Virtual page number  Valid bit  Reference bit  Modify bit  Page frame number
    0                    1          1              0           3
    1                    0          0              0           -
    2                    1          0              1           7
    3                    1          1              1           0
    4                    1          1              0           2
    5                    0          0              0           -
    What physical address, if any, would each of the following virtual addresses correspond to? (If there would be a page fault, just indicate that one would take place. Do not try to handle it.)
    1. 1052

      1052 = 1024 + 28, so the page number is 1 and the offset is 28. The page table shows that page 1 is not currently resident, so there would be a page fault.

    2. 2221

      2221 = 2 * 1024 + 173, so the page number is 2 and the offset is 173. The page table shows that page 2 is resident in frame 7, so the physical address would be 7 * 1024 + 173 = 7341.
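
    Both parts follow the same arithmetic; here is a hedged C sketch of the translation (the page-table representation is invented for illustration):

      #define PAGE_SIZE 1024

      struct pte { int valid; int frame; };          /* simplified page table entry */

      /* Translate a virtual byte address through a one-level page table.
         Returns the physical address, or -1 to indicate a page fault.     */
      long translate(const struct pte *page_table, int num_pages, long vaddr)
      {
          long page   = vaddr / PAGE_SIZE;           /* virtual page number */
          long offset = vaddr % PAGE_SIZE;           /* offset within page  */

          if (page >= num_pages || !page_table[page].valid)
              return -1;                             /* page fault          */
          return (long)page_table[page].frame * PAGE_SIZE + offset;
      }

    With the table above, translate would return -1 (a fault) for address 1052 and 7341 for address 2221.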

  8. A process has four page frames allocated to it. (All the following numbers are decimal, and everything is numbered starting from zero). The time of the last loading of a page into each page frame, the time of last access to the page in each page frame, the virtual page number in each page frame, and the referenced (R) and modified (M) bits for each page frame are as shown (the times are in clock ticks from the process start at time zero to the event -- not the number of ticks since the event to the present).
    Virtual page number  Page frame  Time loaded  Time referenced  R bit  M bit
    2                    0           60           164              1      1
    1                    1           30           166              1      0
    0                    2           150          162              0      1
    3                    3           20           163              1      1
    A page fault to virtual page 4 has occurred. Which page frame will have its contents replaced for each of the following memory management policies? Explain why in each case.
    1. FIFO (first-in-first-out)

      3. The first frame loaded was frame 3, loaded at time 20.

    2. LRU (least recently used)

      2. The least recently referenced page is virtual page 0, last referenced at time 162, and it resides in page frame 2. (Note that the question asked for the page frame, not the virtual page number. For part (a), the page frame number and virtual page number happened to be the same, but for this part we have virtual page number 0 in page frame 2.)
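
      For reference, both selections are simple minimum searches over the frame table; a sketch in C, with an assumed struct layout:

        struct frame_info { int vpn; long time_loaded; long time_referenced; };

        /* FIFO victim: the frame whose page was loaded earliest. */
        int fifo_victim(const struct frame_info *f, int n)
        {
            int i, victim = 0;
            for (i = 1; i < n; i++)
                if (f[i].time_loaded < f[victim].time_loaded)
                    victim = i;
            return victim;
        }

        /* LRU victim: the frame whose page was referenced least recently. */
        int lru_victim(const struct frame_info *f, int n)
        {
            int i, victim = 0;
            for (i = 1; i < n; i++)
                if (f[i].time_referenced < f[victim].time_referenced)
                    victim = i;
            return victim;
        }

      With the data above, fifo_victim returns 3 (loaded at time 20) and lru_victim returns 2 (referenced at time 162).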

  9. Continue the previous question.
    1. Clock. (Suppose the order of the frames in the circular buffer is the same as the order of the page frame numbers.)

      2. The only frame with a zero Referenced bit is frame number 2. The algorithm would run through frames 3, 0, and 1, setting their Referenced bits to zero, before detecting the zero Referenced bit on frame 2. (A short sketch of this sweep appears after part 2.)

    2. Optimal (Use the reference string: 4, 0, 0, 0, 2, 4, 2, 1, 0, 3, 2.)

      3. Of the resident pages, pages 0, 2, and 1 are all referenced before page 3 in the given reference string, so page 3's next use is furthest in the future and the optimal choice is to replace it: page 4 goes into the frame holding virtual page 3. (It happens that page 3 resides in frame 3, so the answer, expressed as a frame number as the question asks, is 3.)
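
      The clock sweep in part 1 can be sketched as follows; the field names and the way the pointer is passed are assumptions for illustration:

        struct clock_frame { int vpn; int referenced; };

        /* One pass of the clock (second-chance) algorithm: starting at *hand,
           clear the Referenced bit of each frame encountered (its second
           chance) and stop at the first frame whose bit is already zero.     */
        int clock_victim(struct clock_frame *f, int n, int *hand)
        {
            for (;;) {
                int i = *hand;
                *hand = (*hand + 1) % n;       /* advance the pointer           */
                if (!f[i].referenced)
                    return i;                  /* zero Referenced bit: replace  */
                f[i].referenced = 0;           /* give the page a second chance */
            }
        }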

  10. Assuming a page size of 2K bytes and that a page table entry takes 4 bytes, how many levels of page tables would be required to map a 32-bit address space, if the top level page table fits into a single page? Explain why.

    Each page is of size 2048 = 2^11, so 11 bits are used for the offset. This leaves 21 bits to specify the page, so we have 2^21 pages. Each page holds 2048/4 = 2^11/2^2 = 2^9 entries, so the top level can address 2^9 pages. Going to two levels allows us to address only 2^9 * 2^9 = 2^18 pages. We need to go to three levels, which allows us to address 2^9 * 2^9 * 2^9 = 2^27 > 2^21 pages.
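
    The same calculation done in C, purely as an illustration:

      #include <stdio.h>

      int main(void)
      {
          unsigned address_bits = 32, page_size = 2048, pte_size = 4;
          unsigned offset_bits = 0, bits_per_level = 0;

          while ((1u << offset_bits) < page_size)
              offset_bits++;                                  /* 11 offset bits   */
          while ((1u << bits_per_level) < page_size / pte_size)
              bits_per_level++;                               /* 9 bits per level */

          unsigned page_number_bits = address_bits - offset_bits;     /* 21 bits   */
          unsigned levels = (page_number_bits + bits_per_level - 1) / bits_per_level;
          printf("%u levels\n", levels);                      /* prints "3 levels" */
          return 0;
      }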

  11. (counts double = 20 pts) Explain in words how you implemented deadlock detection for Programming Assignment #3. Your answer should cover at least: what is the deadlock criterion you check for (i.e. how do you tell if there is a deadlock?); what components (if any) you added to the pthread_t and pthread_mutex_t structs; in what function(s) those components are initialized; in what function(s) they are updated; how an avoided deadlock is reported back to the application program.

    A person could have earned most of the points on this question without having actually completed the assignment, if they had read and understood the statement of the assignment, which says:

    1. Arrange to record which mutex a thread is waiting for, when it is waiting for a mutex. This will involve adding fields to the thread and/or mutex structures, and adding code to initialize and update these new fields.
    2. When a thread attempts to lock a mutex, chain forward, using the owner field of locked mutexes and the information you have recorded about which mutex each thread is waiting for, until you either find the end of the chain, or cycle back to the current thread. If this detects a cycle return EDEADLK; otherwise, go ahead with normal mutex lock operation processing.

    It also told you which files and functions to modify:

    • mutex.h 2 new lines of code, in mac_mutex_lock and mac_mutex_unlock
    • mutex.c 45 new lines of code, in pthread_mutex_lock, pthread_mutex_unlock, pthread_mutex_init, and a new subprogram
    • pthread.h 1 new line of code in the declaration of struct pthread
    • pthread.c 2 new lines of code, for data structure initialization, in pthread_init and pthread_create

    In grading, I looked for the following:

    1. Deadlock criterion: As mentioned above, you should be checking for a cycle in the alternating chain of wait-for and hold relationships.
    2. Components added to structs: You need to add a component to the pthread_t structure to record which mutex (if any) the thread is waiting for. You do not need to add anything to the pthread_mutex_t structure, since it already has a field to record the current holder (i.e., owner) of the mutex.
    3. Functions where new data components are initialized: As mentioned above, this is in pthread_init and pthread_create.
    4. Functions where new data components are updated: pthread_mutex_lock and pthread_mutex_unlock.
    5. How deadlock is reported back: return EDEADLK, as mentioned above.
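
    For concreteness, here is a rough sketch of the cycle check described above, assuming a waiting_for field added to the thread structure and an existing owner field in the mutex structure; the type and field names are placeholders, not necessarily those of the assignment's code base:

      #include <errno.h>

      struct mutex;
      struct thread {
          struct mutex  *waiting_for;   /* new field: mutex this thread is blocked on */
          /* ... existing fields ... */
      };
      struct mutex {
          struct thread *owner;         /* existing field: current holder, or NULL    */
          /* ... existing fields ... */
      };

      /* Chain forward from the mutex that 'self' wants to lock, alternating
         mutex->owner and thread->waiting_for links.  If the chain leads back
         to 'self', granting the lock would close a cycle, i.e. a deadlock.
         Because every lock attempt performs this check before blocking, a
         cycle that does not pass through 'self' should never exist, so the
         loop terminates.                                                     */
      int would_deadlock(struct thread *self, struct mutex *m)
      {
          while (m != NULL && m->owner != NULL) {
              if (m->owner == self)
                  return EDEADLK;       /* cycle found: refuse the lock */
              m = m->owner->waiting_for;
          }
          return 0;                     /* end of chain: safe to block  */
      }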

T. P. Baker