Operating Systems Chapter 6


Cards In This Set

Front Back
What is a cooperating process?
One that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. The former case is achieved through the use of threads. Concurrent access to shared data may result in data inconsistency.
Race condition
A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. To guard against a race condition, we need to ensure that only one process at a time can be manipulating the shared variable (in the textbook example, a shared counter). To make such a guarantee, we require that the processes be synchronized in some way.
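A minimal C sketch of this kind of race, using POSIX threads (the counter name and loop count are illustrative, not from the card; run it a few times and the final value is often less than 200000):

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                   /* shared data, unprotected */

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++)
            counter++;                 /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* the interleaving of the two read-modify-write sequences
           decides the result, which is exactly the race condition */
        printf("counter = %d\n", counter);
        return 0;
    }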
Critical Section
The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section: no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section.
Entry section (of code) and Exit section and Remainder section
The section of code implementing a request to enter the critical section is called the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
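One way to picture how these sections fit together, sketched with a POSIX mutex standing in for the entry/exit protocol (the section mapping is the point here, not the mutex itself):

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *participant(void *arg) {
        for (;;) {
            pthread_mutex_lock(&lock);    /* entry section: request permission */
            /* critical section: access shared data here */
            pthread_mutex_unlock(&lock);  /* exit section */
            /* remainder section: everything else the process does */
        }
        return NULL;
    }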
Solution to Critical Section Problem
Mutual exclusion - if a process is executing in its critical section, then no other processes can be executing in their critical sections.
Progress - if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter next cannot be postponed indefinitely.
Bounded waiting - there exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Preemptive Kernel
Allows a process to be preempted while it is running in kernel mode. Preemptive kernels are especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on different processors. Why would anyone favor a preemptive kernel over a nonpreemptive one? A preemptive kernel is more suitable for real-time programming, as it will allow a real-time process to preempt a process currently running in the kernel. A preemptive kernel may also be more responsive, since there is less risk that a kernel-mode process will run for an arbitrarily long period before relinquishing the processor to waiting processes. This effect can be minimized by designing kernel code that does not behave in this way.
Nonpreemptive kernels
Does not allow a process running in kernel mode to be preempted. Obviously a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.
Kernel mode process
Will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Peterson's solution
Peterson's algorithm (also known as Peterson's solution) is a concurrent-programming algorithm for mutual exclusion that allows two processes to share a single-use resource without conflict, using only shared memory for communication. It was formulated by Gary L. Peterson in 1981. While Peterson's original formulation worked with only two processes, the algorithm can be generalized for more than two.
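A two-thread C sketch of the algorithm (loop counts and names are illustrative; note that Peterson's solution assumes sequentially consistent memory, so on modern hardware the shared variables below would additionally need memory fences):

    #include <pthread.h>
    #include <stdio.h>

    volatile int flag[2] = {0, 0};  /* flag[i]: thread i wants to enter */
    volatile int turn = 0;          /* whose turn it is to defer */
    int counter = 0;                /* shared data protected by the lock */

    void enter(int i) {             /* entry section for thread i */
        int j = 1 - i;
        flag[i] = 1;                /* announce intent */
        turn = j;                   /* give the other thread the turn */
        while (flag[j] && turn == j)
            ;                       /* busy-wait while the other may enter */
    }

    void leave(int i) {             /* exit section */
        flag[i] = 0;
    }

    void *worker(void *arg) {
        int i = *(int *)arg;
        for (int k = 0; k < 100000; k++) {
            enter(i);
            counter++;              /* critical section */
            leave(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        int id[2] = {0, 1};
        pthread_create(&t[0], NULL, worker, &id[0]);
        pthread_create(&t[1], NULL, worker, &id[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        printf("counter = %d\n", counter);
        return 0;
    }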
Synchronization Hardware
Race conditions can be prevented by requiring that critical regions be protected by locks: a process must acquire a lock before entering a critical section and releases the lock when it exits the critical section. The design of such locks can be quite sophisticated, but hardware features can make the programming task easier and improve system efficiency. The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts from occurring while a shared variable was being modified: the current sequence of instructions would be allowed to execute in order without preemption, and no other instructions would be run, so no unexpected modifications could be made to the shared variable. This is often the approach taken by nonpreemptive kernels, but it is not feasible in a multiprocessor environment.
Atomically
That is, as one uninterruptible unit. Many modern systems provide special hardware instructions that execute atomically; we can use these instructions to solve the critical-section problem in a relatively simple manner.
In concurrent programming, an operation is linearizable, atomic, indivisible or uninterruptible if it appears to take effect instantaneously. Implementation details may be ignored by the user, except insofar as they affect performance. On the other hand, if an operation is not atomic, the user will also need to understand and cope with sporadic extraneous behaviour caused by concurrent operations, which by its nature is likely to be hard to reproduce and debug. Those familiar with the ACID properties of databases should note that, in concurrent programming, atomicity is an isolation level, strictly stronger than serializable. Databases have a different definition of the term atomicity.
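The classic special instruction is test-and-set, which atomically returns the old value of a word while setting it to true. A sketch of a lock built on it, with C11 atomic_exchange standing in for the hardware instruction (names are illustrative):

    #include <stdbool.h>
    #include <stdatomic.h>

    atomic_bool lock_held = false;          /* false: free, true: held */

    bool test_and_set(atomic_bool *target) {
        /* atomically set to true and report what was there before */
        return atomic_exchange(target, true);
    }

    void acquire(void) {
        while (test_and_set(&lock_held))
            ;   /* keep trying until the old value was false (lock was free) */
    }

    void release(void) {
        atomic_store(&lock_held, false);
    }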
Semaphore
In computer science, a semaphore is a protected variable or abstract data type that constitutes a classic method of controlling access by several processes to a common resource in a parallel programming environment. A semaphore generally takes one of two forms: binary and counting. A binary semaphore is a simple "true/false" (locked/unlocked) flag that controls access to a single resource. A counting semaphore is a counter for a set of available resources. Either semaphore type may be employed to prevent a race condition. On the other hand, a semaphore is of no value in preventing resource deadlock, as illustrated by the dining philosophers problem. The semaphore concept was invented by the Dutch computer scientist Edsger Dijkstra and has found widespread use in a variety of operating systems. Semaphore values are never negative.
Counting semaphore
Can range over an unrestricted domain. Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait operation on the semaphore, thereby decrementing the count. When a process releases a resource, it performs a signal operation, incrementing the count. When the count for the semaphore goes to 0, all resources are being used; after that, processes that wish to use a resource will block until the count becomes greater than 0.
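A busy-waiting sketch of these wait and signal operations (C11 atomics; the compare-and-swap makes the test and decrement a single atomic step; function names are illustrative, and real implementations block rather than spin, as the last card notes):

    #include <stdatomic.h>

    atomic_int sem;   /* initialize to the number of resource instances */

    void wait_sem(atomic_int *s) {        /* the classical wait (P) operation */
        for (;;) {
            int v = atomic_load(s);
            /* proceed only if an instance is available and we win
               the race to claim it */
            if (v > 0 && atomic_compare_exchange_weak(s, &v, v - 1))
                return;
        }
    }

    void signal_sem(atomic_int *s) {      /* the classical signal (V) operation */
        atomic_fetch_add(s, 1);
    }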
Binary semaphore
Can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the critical-section problem for multiple processes: the n processes share a semaphore, mutex, initialized to 1.
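The n-process pattern described above, sketched with a POSIX semaphore playing the role of mutex (thread creation is elided; the names follow the card):

    #include <semaphore.h>
    #include <pthread.h>

    sem_t mutex;                     /* binary semaphore shared by the n processes */

    void *process(void *arg) {
        for (;;) {
            sem_wait(&mutex);        /* wait(mutex): entry section */
            /* critical section */
            sem_post(&mutex);        /* signal(mutex): exit section */
            /* remainder section */
        }
        return NULL;
    }

    int main(void) {
        sem_init(&mutex, 0, 1);      /* initialized to 1, as above */
        /* create the n competing threads with pthread_create here */
        return 0;
    }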
Busy waiting (aka spinlock)
The main disadvantage of the semaphore definition given here is that it requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This continual looping is clearly a problem in a real multiprogramming system: busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock because the process spins while waiting for the lock. Spinlocks do have an advantage in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time. Thus, when locks are expected to be held for short times, spinlocks are useful; they are often employed on multiprocessor systems where one thread can spin on one processor while another thread performs its critical section on another processor.
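POSIX exposes exactly this trade-off as a spinlock type; a minimal usage sketch (Linux/glibc, assuming critical sections short enough that spinning beats a context switch):

    #include <pthread.h>

    pthread_spinlock_t lock;

    void *worker(void *arg) {
        pthread_spin_lock(&lock);    /* spins (busy-waits) instead of sleeping */
        /* short critical section: spinning is cheaper than a context switch */
        pthread_spin_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        pthread_spin_destroy(&lock);
        return 0;
    }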