Midterm Examination #1 COP 4610 - Operating Systems & Concurrent Programming

This is a closed-book examination. You are not allowed to use any notes or other references during the exam. You will have 75 minutes to answer all the questions. Answer on this paper. Write as much of your answer as will fit in the space provided. If your answer does not fit entirely in the space below the question, use the blank paper provided to you to complete your answer. If you believe a question is ambiguous or a multiple-choice question has other than one correct answer, please write a note on the paper, explaining what you think is wrong with the question. The exam will be graded on a scale where 100 pts earns a grade of 'A', but it is possible to achieve a score of up to 130. You will probably find some questions are easier for you than others. Therefore, it is important to budget your time. Do not miss the chance to answer some easier questions later in the exam by puzzling too long over one of the earlier questions. Remember to write your name on every page, and turn in all pages of the exam.

The median score on this exam was 83 and the average was 84.

I. True/False: 50 questions, at one point each.

Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left side of the question.

  1. ( T ) Interrupt-driven I/O requires the active intervention of the processor to transfer data between memory and an I/O module.
  2. ( F ) An I/O device that uses DMA I/O has no need to interrupt the processor.
    (Note: this is strictly true, but could be interpreted as false, since most DMA devices do use interrupts to signal the processor when an I/O operation is complete, or when buffers are full/empty.)

  3. ( F ) The following function is reentrant:
    char * int_to_string (int i) {static char buf[10]; sprintf (buf, "%10d", i); return buf;}
    (Note that without the "static", the function is worse than non-reentrant. The effect is undefined, because it returns a pointer to an object in the function's activation record. A reentrant alternative is sketched at the end of this section.)
  4. ( T ) A Unix process may contain one or several threads.
  5. ( F ) A Unix process may belong to several threads.
  6. ( F ) Each Unix process has its own copy of the files it is working on.
  7. ( F ) In a Unix system with multithreaded processes, each thread has its own protections/permissions to access system resources.
  8. ( F ) In a Unix system with multithreaded processes, each thread has its own mapping from file descriptors to open file descriptions.
  9. ( T ) Two Unix processes may simultaneously have open file descriptors that refer to the same open file description.
  10. ( T ) Two different open file descriptions may refer to the same file.
  11. ( T ) Mechanisms for mutual exclusion based on special machine instructions that atomically combine a pair of fetch and store cycles require busy-waiting if there is another process in the critical section.
  12. ( T ) Mechanisms for mutual exclusion based on special machine instructions that atomically combine a pair of fetch and store cycles can be used to implement multiple independent locks, i.e., a process in a critical section for one resource will not block a process that is in a critical section for a different resource.
  13. ( T ) The Peterson and Dekker software algorithms for mutual exclusion rely on the ability to atomically fetch a word from memory, and atomically store a word to memory.
  14. ( F ) The Peterson software algorithm for mutual exclusion is simpler to implement (e.g., less code, fewer steps, maybe less execution time overhead) than what Stallings calls hardware-based mutual exclusion mechanisms.
  15. ( T ) If mutual exclusion is achieved by interrupt disabling, on a single-CPU system, no deadlock is possible due to this form of mutual exclusion.
  16. ( F ) One of the effects of the counting semaphore operation Stallings calls "wait" is to increment (means "add one to") the count of the semaphore.
  17. ( F ) An advantage of monitors over semaphores is that they do not require a lower-level mutual exclusion mechanism to implement them.
  18. ( F ) An advantage of POSIX mutexes over semaphores is that they are not subject to deadlock.
  19. ( F ) An advantage of busy-waiting over hardware-based mutual exclusion mechanisms is that a waiting process may be preempted.
  20. ( T ) In an SMP system, implementing mutual exclusion requires a combination of interrupt masking and busy-waiting.
  21. ( F ) An advantage of the exchange instruction over the test-and-set instruction is that it does not require busy waiting.
  22. ( T ) With a counting semaphore, as described by Stallings, when the count becomes negative (which can only happen as a result of a process performing a semaphore operation that decrements the count), the calling process is blocked.
  23. ( T ) pthread_cond_wait() should only be called in a while loop that checks what Stallings calls the "level 2 condition".
  24. ( T ) The effect of pthread_cond_wait() includes first unlocking the mutex, then putting the thread to sleep, and later relocking the mutex.
  25. ( T ) After return from a call to pthread_cond_wait(&C, &M) the calling thread can rely on the mutex M being locked.
  26. ( F ) A call to pthread_cond_wait() cannot return until another thread has called pthread_cond_signal() for the same CV.
  27. ( T ) If one or more threads are sleeping on a call to pthread_cond_wait() and another thread calls pthread_cond_signal() for the same CV, at least one of the sleeping threads will wake up and try to relock the mutex.
  28. ( T ) If one or more threads are sleeping on a call to pthread_cond_wait() and another thread calls pthread_cond_signal() for the same CV, it may be that all of the sleeping threads will wake up and try to relock the mutex.
  29. ( T ) For a given mutex: if one or more threads are blocked on a call to pthread_mutex_lock() and the thread that is holding the lock calls pthread_mutex_unlock(), exactly one of the blocked threads will be unblocked and lock the mutex.
  30. ( T ) A difference between a binary semaphore and a POSIX mutex is that the process that sets a binary semaphore to 1 (unlocked) need not be the same process as the one that sets the binary semaphore to zero (locked), but the thread that unlocks a mutex is required to be the same thread that locked it.
  31. ( T ) In a single processor system, all the threads execute in an interleaved fashion, sharing the CPU.
  32. ( F ) In Unix, each process has its own set of open file descriptions.
  33. ( F ) Each process has its own ready queue.
  34. ( F ) A context-switch between threads always requires saving and restoring all the control registers, such as the PSW (Program Status Word).
  35. ( T ) A context-switch between processes always requires saving and restoring all the control registers, such as the PSW (Program Status Word).
  36. ( T ) A context-switch between processes always requires a mode switch.
  37. ( F ) Thread creation and thread synchronization can be done with less execution time overhead if threads are implemented at the kernel level.
  38. ( T ) Thread switching does not require kernel privileges, so a library implementation of threads can implement a thread service call like pthread_mutex_lock() without the overhead of any context switches.
  39. ( F ) Thread switching requires kernel privileges, so even a library implementation of threads cannot implement a blocking thread service call like pthread_cond_wait() without doing kernel calls.
  40. ( F ) An advantage of user-level threads is that a process can speed up its execution by making use of more than one CPU.
  41. ( F ) The producer/consumer problem is a special case of the readers/writers problem, in which the producer is the writer and the consumer is the reader.
  42. ( T ) In the readers/writers problem we want to allow multiple readers to be in their critical sections for the shared data at the same time.
  43. ( F ) In the readers/writers problem we want to allow multiple writers to be in their critical sections for the shared data at the same time.
  44. ( F ) In the readers/writers problem, we want to allow a reader and a writer to be in their critical sections for the shared data at the same time.
  45. ( T ) In the producer/consumer problem only the producer ever has to wait.
  46. ( F ) If one thread of a process is holding the lock on a mutex, and another thread of the process attempts to lock the mutex, the whole process is blocked.
  47. ( T ) Once a system is in an unsafe state (as the term is used in the banker's algorithm) there is at least one sequence of process requests that will unavoidably lead to deadlock.
  48. ( F ) Once a deadlock is detected, the problem can be solved by suspending one or more of the deadlocked processes.
  49. ( T ) If a process performs a signal operation on a counting semaphore whose value is zero, and there are one or more other processes waiting on the semaphore, exactly one of the other processes will be unblocked.
  50. ( T ) In the process models described by Stallings, whether a process is blocked or not is independent of whether it is suspended or not.
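
A note on question 3: one way to make such a function reentrant is to have the caller supply the output buffer, so that the function writes no static or shared storage. The sketch below is not part of the exam; the name int_to_string_r and its interface are illustrative only.

    #include <stdio.h>

    /* Reentrant variant: the caller owns the buffer, so concurrent or
       nested calls cannot interfere with one another. */
    char * int_to_string_r (int i, char *buf, size_t len) {
      snprintf (buf, len, "%10d", i);   /* bounded, unlike sprintf into a fixed static buffer */
      return buf;
    }

For example, each thread can pass its own local array, such as char tmp[16]; int_to_string_r (x, tmp, sizeof tmp);, so concurrent calls cannot interfere with one another.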

II. Multiple choice: 20 questions, at two points each.

Answer each question by marking an "X" over the dot to the left of the best answer, except as noted below for the two questions that have more than one correct answer.

  1. The tendency of execution to involve a number of memory locations that are clustered, i.e., with addresses that are close together, is called
  2. Which of the following kinds of storage are volatile? (Mark all the correct answers.)
  3. Which of the following is not true of deadlock avoidance, as in the banker's algorithm?
  4. When an external device detects an event that requires the attention of the operating system, it
  5. If we were to experimentally measure the performances of memory caches of a given fixed size, with different block sizes, starting with a very small block size (one byte) and working up through very large block sizes (the whole cache as one block), we would expect to see the hit ratio
  6. The main function of the dispatcher is to choose which
  7. Which of the following states is least likely to apply to a thread, in a system with multithreaded processes?
  8. In the Solaris operating system, the relationship between user-level threads (ULTs), kernel-level threads (KLTs), and lightweight processes (LWPs) is
  9. In the Linux operating system, the relationship between user-level threads (ULTs), kernel-level threads (KLTs), and lightweight processes (LWPs) is
  10. Which of the following would normally be a reason for blocking a process, but not suspending it? (Mark all the correct answers.)
  11. Which of the following mutual exclusion mechanisms is not adequate for an SMP system?
  12. Which of the following is not true of a monitor?
  13. If processes A, B, and C are all requesting resource R, but the operating system allows A and B to access R repeatedly while C continues to wait for R, that is called
  14. If processes A, B, and C are all requesting resource R, but the operating system allows A and B to access R repeatedly while C continues to wait for R, that is called
  15. Which of the following ways of dealing with deadlock applies best to the allocation of main memory resources in a modern operating system, which supports on-demand paging and/or swapping?
  16. Which of the following ways of dealing with deadlock applies best to mutexes and monitors?
  17. For which of the following kinds of resources would deadlock avoidance be most likely to work well?
  18. One algorithm simulates the future execution of the system, based on the assumption that every unblocked process will eventually complete and release its resources, to see if under this assumption all processes can complete. This is used for
  19. One algorithm simulates the possible future execution of the system, based on the assumption that every process will only complete after its declared maximum total resource allocations have all been satisfied, to see if the maximum future requests of all processes can be satisfied and so they will all be able to complete. This is used for

III. Other questions: 6 questions, at five points each.

  1. List the four necessary conditions for deadlock, and give a one-sentence definition of each.

    There are only four correct answers for the conditions, but many equally good ways of wording the definitions. For example:

    • Mutual exclusion: only one process at a time may use a given resource.
    • Hold and wait: a process may hold resources already allocated to it while waiting for additional resources.
    • No preemption: a resource cannot be forcibly taken away from the process that holds it.
    • Circular wait: there is a closed chain of processes, each holding at least one resource needed by the next process in the chain.

  2. Answer both parts.

  3. Suppose there is a (non-virtual memory) system that has only 200 MB of main memory and there are two processes that make the following sequences of requests:

    P1                P2
    ...               ...
    Request 80 MB;    Request 70 MB;
    ...               ...
    Request 60 MB;    Request 80 MB;

    Can there be a deadlock? If not, argue why not. Otherwise, explain how the deadlock can happen, and then also explain how it could be avoided.

    There can be a deadlock. One way it can occur is if the execution of P1 and P2 is interleaved so that: (1) P1 requests 80 MB and it is granted; (2) P2 requests 70 MB and it is granted; (3) P1 requests 60 MB, which cannot be granted because only 50 MB remain free, so P1 blocks; (4) P2 requests 80 MB, which also cannot be granted, so P2 blocks. At that point (and only then) there is a deadlock. Note that the deadlock does not occur until both processes are fatally blocked. Several people only carried the example up to step (2), and claimed there was a deadlock at that point. That is not so. After step (2) the system is in an unsafe state, and future deadlock is unavoidable, but it is not yet deadlocked.

    To avoid the deadlock, one can apply the Banker's Algorithm, which would cause P2 to block at step (2) above, until P1 releases its 80 MB allocation. Note that the Banker's Algorithm does not make P2 wait to start until P1 has completed. It only blocks P2 at the request for 70 MB. Note also that several other techniques suggested by students, such as giving P1 all the memory, or making processes request all their memory at once, are prevention techniques, not avoidance techniques.
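
    To make the avoidance decision concrete, the following is a small sketch (not part of the exam) of the safety test that the Banker's Algorithm would apply at step (2). The maximum claims (80 + 60 = 140 MB for P1 and 70 + 80 = 150 MB for P2) are inferred from the request sequences above; the names is_safe, alloc, and max are illustrative only.

    #include <stdio.h>

    #define NPROCS 2

    /* Returns 1 if every process can still run to completion from this
       state (i.e., the state is safe), 0 otherwise. */
    static int is_safe (int alloc[], int max[], int available) {
      int done[NPROCS] = {0};
      int finished = 0;
      while (finished < NPROCS) {
        int progress = 0;
        for (int i = 0; i < NPROCS; i++) {
          if (!done[i] && max[i] - alloc[i] <= available) {
            available += alloc[i];   /* assume process i completes and releases its memory */
            done[i] = 1;
            finished++;
            progress = 1;
          }
        }
        if (!progress) return 0;     /* no remaining process can finish: unsafe */
      }
      return 1;
    }

    int main (void) {
      int max[NPROCS]   = {80 + 60, 70 + 80};   /* declared maximum claims */
      int alloc[NPROCS] = {80, 0};              /* state after step (1) */

      /* Step (2): would granting P2's request for 70 MB leave a safe state? */
      alloc[1] += 70;
      printf ("Granting 70 MB to P2 leaves the state %s.\n",
              is_safe (alloc, max, 200 - alloc[0] - alloc[1])
                ? "safe" : "unsafe, so the Banker's Algorithm delays the request");
      return 0;
    }

    This check reports the post-grant state as unsafe (P1 would still need 60 MB and P2 another 80 MB, but only 50 MB would remain free), which is why the algorithm blocks P2 at step (2). If P2's request has been delayed, the same test later allows P1's request for 60 MB, since P1 could then complete and release everything, after which P2's maximum of 150 MB fits in the 200 MB of memory.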

  4. What is the output for one of the legal thread scheduling orders of the following program? Assume the correct #include's are provided and the program compiles and runs without error.

    pthread_mutex_t M;
    pthread_cond_t C;
    volatile int X = 0;
    
    void * body (void * arg) {
      pthread_mutex_lock (&M);
      fprintf (stderr, " 1/%d ", (int) arg);
      X++;
      pthread_cond_signal (&C); 
      while (X <= (int) arg) {
        pthread_cond_wait (&C, &M);
        fprintf (stderr, " 2/%d ", (int) arg);
      }
      fprintf (stderr, " 3/%d ", (int) arg);
      X++;
      pthread_mutex_unlock (&M);
      pthread_cond_signal (&C); 
    }
    
    int main() {
      pthread_t t1, t2;
      pthread_mutex_init (&M, NULL);
      pthread_cond_init (&C, NULL);
      pthread_create (&t1, NULL, body, 1);
      pthread_create (&t2, NULL, body, 2);
      pthread_join (t1, NULL);
      pthread_join (t2, NULL);
      return 0;
    }

    The main point of this question was to see if you understand how condition variables work. It is fairly easy to see that one thread will output a sequence of the form 1/1 ( 2/1 )* 3/1, and the other will output a sequence of the form 1/2 ( 2/2 )* 3/2, where the notation ( ... )* is intended as a regular expression, indicating zero or more repetitions of the text inside. The critical issue is that the way the condition variable is used imposes some constraints on the interleavings of these two sequences, namely:

    • Thread 1 cannot leave its while-loop until both threads have incremented X (making the value 2), so "2/1" must follow "1/2".
    • Thread 2 cannot leave its while-loop until thread 1 has incremented X the second time (making the value 3), so "2/2" must follow "3/1".

    Thus, for example, one possible output sequence is " 1/1 1/2 2/1 3/1 2/2 3/2".

    (I apologize for the typos in this question, which caused some distance students confusion. Therefore, those few students who did not see the corrections, and who wrote as their answer that the program will not compile, and so will not run, received credit. However, in the future, please be aware that if the point of a question with code is for you to find errors in it, I will (as in question 6) make that clear. Otherwise, you should assume that the code is part of a compilable and then executable program. In this case I did compile and execute the code, as I always do for exam questions containing code. I guess that I must have used "gcc q4.c" without the "-ansi -Wall -pedantic", and so the compiler did not catch the typos.)

  5. What is the output of the following program? Assume the correct #include's are provided. (Hint: The call read (a, b, c) reads c bytes from the file designated by file descriptor a into the buffer designated by pointer b, and write (a, b, c) writes c bytes to the file designated by file descriptor a from the buffer designated by pointer b. These operations read and write raw bits, without respect to datatype.) Assume the program load module exists in a file called "q4" in the current working directory. Assume the program is initially called as "q4" with no command-line arguments, i.e., argc = 1. Of course, if the execve() call in the program is executed, the argc value after that will be determined by the relevant parameter to execve().

    int n = 0;
    int main(int argc, char *argv[]) {
      pid_t child;
      int status;
      int fd[2];
      char *myargv[] = {"q4", "stop", NULL};
      char buf[10];
      n++;
      if (argc > 1) {
        read (0, &n, sizeof (n));
        printf ("n = %d\n", n);
        exit (3);
      }
      pipe (fd);
      child = fork();
      if (child != 0) {
         write (fd[1], &n, sizeof (n));
         waitpid (child, &status, 0);
         n = WEXITSTATUS (status);
      } else {
        n++;
        dup2 (fd[0], 0);
        execve ("q4", myargv, NULL);
      }
      printf ("n = %d\n", n);
      return 0;
    }

    Output:
    n = 1
    n = 3

    Assuming the first execution of "q4" has argc = 1, the original process will increment n from 0 to 1 and continue on to the fork() call without producing any output. It will write its current value of n (which is 1) to the pipe, and wait for the child to terminate. The child will increment its copy of n (to 2), redirect stdin to the pipe, and exec itself with the argument "stop". After the exec, the child process will start the program over from the beginning, with argc = 2. It will read into n the value (1) put into the pipe by the parent process, print "n = 1\n", and exit with value 3. The parent will wake up, and store into its copy of n the exit status value (3) provided by the child, print "n = 3\n", and exit.

  6. Suppose the following code is taken from the consumer side of a producer-consumer pair. Suppose the compilation context of the code is correct, so there are #includes for all necessary headers, and all variables are declared and initialized correctly. In particular, assume "char *shared_buf[NITEMS];" is declared and NITEMS is an integer > 1, as in the examples covered in class.

    char *local_buf = (char *) malloc (N);          /* correction: was declared but never initialized */
    ...
    pthread_mutex_lock (&M);
    while (next_out == next_in) {                   /* correction: was "if" */
       /* correction: deleted pthread_mutex_unlock (&M); here */
       pthread_cond_wait (&CV, &M);
       /* correction: deleted pthread_mutex_lock (&M); here */
    }
    strncpy (local_buf, shared_buf[next_out], N);   /* correction: was strcpy (local_buf, shared_buf[next_out]); */
    next_out = (next_out + 1) % NITEMS;
    pthread_cond_signal (&CV);                      /* correction: this call was missing */
    pthread_mutex_unlock (&M);

    What is wrong with the code? List at least three distinct major defects in design and/or coding, and explain or show how to correct each defect. If possible, show corrections by crossing out and/or writing in bits of code above. If it would take more than just a simple change to correct a problem, use a sentence to explain what needs to be done. (Don't waste time trying to write out a complete correct solution!)

    There are more than three errors. The corrections marked by comments in the code above (originally shown as green insertions) are an approximate attempt to correct these errors, which we describe below:

    1. The pointer variable local_buf is never initialized.
    2. The call to pthread_cond_wait() has no loop to test for the secondary condition.
    3. The calls to pthread_mutex_unlock() and pthread_mutex_lock() around the call to pthread_cond_wait() are wrong, since waiting on a condition variable requires the mutex be held (already locked) by the calling thread, and on return from the wait operation the mutex is again held (locked) by the thread.
    4. There is danger of buffer overflow with strcpy. It should be replaced by a use of strncpy(). (Note that the proposed solution shown above is not complete, since it does not check to see whether the entire string was copied.)
    5. A call to pthread_cond_signal() is needed to wake up the producer, after next_out is incremented. (Note that it would be wrong to say that the consumer needs to be awakened by this call, or to place the call inside the while loop or if statement.)
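
    For comparison, the following is a sketch (not part of the exam) of what the matching producer side might look like, assuming the same shared variables (M, CV, shared_buf, next_in, next_out, NITEMS, declared and initialized as in the class examples) and the common circular-buffer convention in which one slot is left unused, so the buffer is full when (next_in + 1) % NITEMS == next_out. The function name produce and the parameter item are illustrative only. With a single producer and a single consumer sharing one condition variable, a signal from either side can only wake the other.

    void produce (char *item) {
      pthread_mutex_lock (&M);
      while ((next_in + 1) % NITEMS == next_out) {   /* buffer full: wait for the consumer */
        pthread_cond_wait (&CV, &M);
      }
      shared_buf[next_in] = item;                    /* deposit a pointer to the item */
      next_in = (next_in + 1) % NITEMS;
      pthread_cond_signal (&CV);                     /* wake the consumer if it is waiting */
      pthread_mutex_unlock (&M);
    }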

    T. P. Baker