Starvation (computer science)

In computer science, resource starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work.[1] Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb.

When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free, lockout-free,[2] or said to have finite bypass.[3] This property is an instance of liveness, and is one of the two requirements for any mutual exclusion algorithm, the other being correctness. The name "finite bypass" means that any process (concurrent part) of the algorithm is bypassed at most a finite number of times before being allowed access to the shared resource.[3]

Scheduling

Starvation is usually caused by an overly simplistic scheduling algorithm. For example, if a (poorly designed) multi-tasking system always switches between the first two tasks while a third never gets to run, then the third task is being starved of CPU time. The scheduling algorithm, which is part of the kernel, is supposed to allocate resources equitably; that is, it should ensure that no process perpetually lacks the resources it needs.
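
As a toy illustration (not taken from any real kernel), the following Python sketch models a scheduler that only ever alternates between the first two tasks in its list; the task names and the scheduling loop are hypothetical, and the point is simply that a third task never receives any CPU time.

    # Naive "scheduler" that only alternates between the first two tasks.
    def naive_scheduler(tasks, time_slices):
        runtime = {t: 0 for t in tasks}          # CPU time received by each task
        for slice_no in range(time_slices):
            chosen = tasks[slice_no % 2]         # only tasks[0] and tasks[1] ever run
            runtime[chosen] += 1
        return runtime

    print(naive_scheduler(["A", "B", "C"], 100))
    # {'A': 50, 'B': 50, 'C': 0} -- task C is starved of CPU time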

Many operating system schedulers employ the concept of process priority. A high-priority process A will run before a low-priority process B. If the high-priority process (A) never blocks and never yields, the low-priority process (B) will (in some systems) never be scheduled; it will experience starvation. If there is an even higher-priority process X that depends on a result from process B, then X might never finish, even though it is the most important process in the system. This condition is called priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes receive a minimum amount of each important resource (most often CPU time), precisely to prevent any process from being starved.
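
The effect of strict priorities can be sketched in the same toy style; the process names and priority values below are illustrative assumptions, not the behaviour of any particular operating system.

    # Strict-priority scheduling: the highest-priority runnable process always wins.
    priorities = {"A": 10, "B": 1}               # higher number runs first
    runtime = {p: 0 for p in priorities}

    for _ in range(1000):
        # A never blocks or yields, so both processes are runnable every tick
        # and the scheduler always picks A.
        chosen = max(priorities, key=priorities.get)
        runtime[chosen] += 1

    print(runtime)                               # {'A': 1000, 'B': 0} -- B starves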

In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling.
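
A rough sketch of why maximum throughput scheduling can starve a user: if the scheduler always serves whichever user currently has the best link rate, a user whose rate is persistently lower is never served. The rates and the fluctuation model below are assumptions chosen only to illustrate the effect.

    import random

    random.seed(0)
    mean_rate = {"near_user": 8.0, "far_user": 2.0}   # assumed mean link rates
    served = {u: 0 for u in mean_rate}

    for _ in range(1000):
        # Instantaneous rates fluctuate around the mean; the far user's rate
        # never exceeds the near user's, so it is never selected.
        rate = {u: random.uniform(0.5, 1.5) * m for u, m in mean_rate.items()}
        served[max(rate, key=rate.get)] += 1

    print(served)                                     # {'near_user': 1000, 'far_user': 0}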

Starvation is similar to deadlock in that both cause a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another process in the same set. In contrast, a process is in starvation when it is waiting for a resource that is continually granted to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free.[3]
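
The difference can be made concrete with a toy model (illustrative only, not a real lock implementation): in each round, processes A and B both request the critical section. An "arbitrary" policy that happens to always favour A is still deadlock-free, since someone enters every round, but B may be bypassed forever; a FIFO (ticket-style) policy serves requests in arrival order, so the number of times any process is bypassed is bounded.

    from collections import deque

    def arbitrary_policy(rounds):
        entries = {"A": 0, "B": 0}
        for _ in range(rounds):
            entries["A"] += 1            # a permitted "arbitrary" choice: always pick A
        return entries

    def fifo_policy(rounds):
        entries = {"A": 0, "B": 0}
        queue = deque(["A", "B"])        # requests served in arrival order
        for _ in range(rounds):
            p = queue.popleft()          # the longest-waiting process enters next
            entries[p] += 1
            queue.append(p)              # it immediately requests again
        return entries

    print(arbitrary_policy(10))          # {'A': 10, 'B': 0} -- B can starve
    print(fifo_policy(10))               # {'A': 5, 'B': 5}  -- bounded bypass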

A possible solution to starvation is to use a scheduling algorithm with a priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that have waited in the system for a long time.[4]
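
A minimal sketch of aging, with hypothetical priorities and an assumed aging step of one unit per tick: each waiting process accumulates a bonus to its effective priority, so even a process with a low base priority is eventually chosen.

    def aged_scheduler(base_priority, ticks, aging_step=1):
        """base_priority: name -> static priority (higher runs first)."""
        waiting_bonus = {p: 0 for p in base_priority}
        runtime = {p: 0 for p in base_priority}
        for _ in range(ticks):
            # Effective priority = base priority + accumulated waiting bonus.
            chosen = max(base_priority,
                         key=lambda p: base_priority[p] + waiting_bonus[p])
            runtime[chosen] += 1
            waiting_bonus[chosen] = 0                 # the winner's bonus resets
            for p in base_priority:
                if p != chosen:
                    waiting_bonus[p] += aging_step    # everyone else keeps aging
        return runtime

    print(aged_scheduler({"A": 10, "B": 1}, 100))
    # Unlike strict priorities, B now receives some CPU time as its priority ages.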

References

  1. Tanenbaum, Andrew (2001). Modern Operating Systems. Prentice Hall. pp. 184–185. ISBN 0-13-092641-8.
  2. Herlihy, Maurice; Shavit, Nir (2012). The Art of Multiprocessor Programming. Elsevier. p. 24. ISBN 9780123977953.
  3. Raynal, Michel (2012). Concurrent Programming: Algorithms, Principles, and Foundations. Springer Science & Business Media. pp. 10–11. ISBN 978-3642320279.
  4. Galvin, Peter (2010). Operating System Concepts. Wiley India Edition. p. 193. ISBN 978-81-265-2051-0.