Synchronization (computer science)

In computer science, synchronization is the task of coordinating multiple processes to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions.

Not to be confused with Data synchronization.
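The "join up or handshake at a certain point" behaviour described above is exactly what a barrier primitive provides. As a minimal sketch (the language choice is illustrative, not the article's; Python's standard `threading.Barrier` is used here):

```python
import threading

results = []

def worker(barrier, name):
    # Each thread does its own work, then waits at the barrier.
    barrier.wait()          # no thread proceeds until all three have arrived
    results.append(name)    # runs only after the handshake point

# A barrier for exactly three participating threads.
barrier = threading.Barrier(3)
threads = [threading.Thread(target=worker, args=(barrier, i)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the run, `results` contains an entry from every thread, since none could pass the barrier until all had reached it.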

Minimization

One of the challenges in exascale algorithm design is to minimize or reduce synchronization, which takes more time than computation, especially in distributed computing. Reducing synchronization has drawn attention from computer scientists for decades, and it has become an increasingly significant problem as the gap between the improvement of computing performance and communication latency widens. Experiments have shown that (global) communication due to synchronization on distributed computers takes a dominant share of the run time in a sparse iterative solver.[2] This problem has received increasing attention since the emergence of a new benchmark metric, the High Performance Conjugate Gradient (HPCG),[3] for ranking the top 500 supercomputers.

Mathematical foundations

Synchronization was originally a process-based concept whereby a lock could be obtained on an object. Its primary usage was in databases. There are two types of (file) lock: read-only and read–write. Read-only locks may be obtained by many processes or threads at once. Read–write locks are exclusive, as they may only be held by a single process/thread at a time.
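The shared/exclusive distinction can be sketched as a readers–writer lock. Python's standard library has no such lock, so the sketch below builds a minimal (writer-unfair) one from a `threading.Condition`; the class and method names are our own:

```python
import threading

class ReadWriteLock:
    """Minimal readers-writer lock sketch: many readers OR one writer."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1      # shared: any number of readers may enter

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # wake a writer waiting for readers to leave

    def acquire_write(self):
        self._cond.acquire()        # exclusive: blocks out other writers...
        while self._readers > 0:    # ...and waits until all readers have left
            self._cond.wait()

    def release_write(self):
        self._cond.release()
```

Note this sketch lets a steady stream of readers starve a writer; production readers–writer locks add fairness policies on top of the same idea.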


Although locks were derived for file databases, data is also shared in memory between processes and threads. Sometimes more than one object (or file) is locked at a time. If multiple locks are not acquired together, acquisitions by different processes can overlap in conflicting orders, causing a deadlock.
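A standard way to prevent such overlapping acquisitions from deadlocking is to impose one global order in which every thread takes its locks. A small illustrative sketch (names are ours):

```python
import threading

# Two locks guarding two shared objects. Deadlock arises when one thread
# holds lock_a and waits for lock_b while another holds lock_b and waits
# for lock_a. Acquiring locks in one agreed global order breaks the cycle.
lock_a = threading.Lock()
lock_b = threading.Lock()
log = []

def task(name):
    # Every thread follows the same order: lock_a first, then lock_b.
    with lock_a:
        with lock_b:
            log.append(name)    # work on both shared objects

t1 = threading.Thread(target=task, args=("t1",))
t2 = threading.Thread(target=task, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

If one of the threads instead took `lock_b` before `lock_a`, the two acquisitions could interleave into a circular wait and neither thread would ever finish.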


Java and Ada only have exclusive locks because they are thread-based and rely on the compare-and-swap processor instruction.
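Compare-and-swap (CAS) atomically replaces a value only if it still equals an expected old value, which lets a thread retry an update instead of holding a lock. Python does not expose the processor instruction, so the sketch below is a toy model that emulates CAS semantics with an internal lock; the `AtomicInt` class and `increment` helper are illustrative names, not a real library API:

```python
import threading

class AtomicInt:
    """Toy model of a CAS-based atomic integer. Real hardware provides
    compare-and-swap as a single instruction; here it is emulated."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        # Atomically: if the value still equals `expected`, store `new`
        # and report success; otherwise leave it unchanged and report failure.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def get(self):
        with self._lock:
            return self._value

def increment(a):
    # Retry loop in the lock-free style built on CAS: re-read and retry
    # whenever another thread changed the value in between.
    while True:
        old = a.get()
        if a.compare_and_swap(old, old + 1):
            return
```

This retry-loop pattern is the building block thread-based runtimes use to implement their higher-level synchronization primitives.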


An abstract mathematical foundation for synchronization primitives is given by the history monoid. There are also many higher-level theoretical devices, such as process calculi and Petri nets, which can be built on top of the history monoid.

Operating systems such as Windows provide:

- interrupt masks, which protect access to global resources (critical sections) on uniprocessor systems;
- spinlocks, which prevent, in multiprocessor systems, a spinlock-holding thread from being preempted;
- dynamic dispatchers, which act like mutexes, semaphores, events, and timers.
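A spinlock differs from a mutex in that a waiting thread busy-waits (spins) instead of sleeping, which pays off when the lock is held only briefly. A minimal sketch, using a non-blocking `threading.Lock.acquire` as a stand-in for the hardware's atomic test-and-set (the `SpinLock` class is our own illustration):

```python
import threading

class SpinLock:
    """Sketch of a spinlock: busy-waits rather than blocking the thread.
    A real spinlock uses an atomic test-and-set instruction; here a
    non-blocking threading.Lock acquire stands in for that instruction."""

    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Spin: repeatedly attempt the atomic test-and-set until it succeeds.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

# Usage: protect a shared counter updated by several threads.
counter = 0
lock = SpinLock()

def work():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, all 4,000 increments survive; without it, concurrent read-modify-write updates could be lost.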

External links

- Anatomy of Linux synchronization methods at IBM developerWorks
- The Little Book of Semaphores by Allen B. Downey
- Need of Process Synchronization