Reference counting

In computer science, reference counting is a programming technique of storing the number of references, pointers, or handles to a resource, such as an object, a block of memory, or disk space.

In garbage collection algorithms, reference counts may be used to deallocate objects that are no longer needed.
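
As a minimal illustrative sketch (not from the source), naive reference counting for a heap object might look like the following in C++, with create, retain, and release as hypothetical helper names:

    #include <cstdlib>

    // Sketch of naive (single-threaded, cycle-unaware) reference counting.
    struct RefCounted {
        int refcount;                 // number of live references to this object
        // ... payload fields would follow ...
    };

    RefCounted* create() {
        RefCounted* obj = static_cast<RefCounted*>(std::malloc(sizeof(RefCounted)));
        obj->refcount = 1;            // the creating reference
        return obj;
    }

    void retain(RefCounted* obj)  { ++obj->refcount; }   // a reference was copied

    void release(RefCounted* obj) {
        if (--obj->refcount == 0)     // the last reference was destroyed
            std::free(obj);           // reclaim immediately
    }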

The frequent updates it involves are a source of inefficiency. While tracing garbage collectors can impact efficiency severely via context switching and cache line faults, they collect relatively infrequently, whereas accessing objects, and hence updating counts, is done continually. Also, less importantly, reference counting requires every memory-managed object to reserve space for a reference count. In tracing garbage collectors, this information is stored implicitly in the references that refer to that object, saving space, although tracing garbage collectors, particularly incremental ones, can require additional space for other purposes.

The naive algorithm described above cannot handle reference cycles: an object that refers directly or indirectly to itself. A mechanism relying purely on reference counts will never consider cyclic chains of objects for deletion, since their reference count is guaranteed to stay nonzero. Methods for dealing with this issue exist but can also increase the overhead and complexity of reference counting; on the other hand, these methods need only be applied to data that might form cycles, often a small subset of all data. One such method is the use of weak references, while another involves using a mark-sweep algorithm that is called infrequently to clean up.
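
For instance, here is a minimal sketch of a cycle that pure reference counting never reclaims, using C++'s reference-counted std::shared_ptr (the Node type is illustrative):

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // a counted ("strong") reference
    };

    void leak() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;                  // b's count rises to 2
        b->next = a;                  // a's count rises to 2
    }   // a and b go out of scope, so each count drops only to 1:
        // the cycle keeps both counts nonzero, and neither Node is freed

Declaring next as a std::weak_ptr<Node> instead would break the cycle, at the cost of the reference no longer keeping its target alive.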

In a concurrent setting, all updates of the reference counts and all pointer modifications must be atomic operations, which incurs an additional cost. There are three reasons for the atomicity requirements. First, a reference count field may be updated by multiple threads, and so an adequate atomic instruction, such as a (costly) compare-and-swap, must be used to update the counts. Second, it must be clear which object loses a reference so that its reference count can be adequately decremented. But determining this object is non-trivial in a setting where multiple threads attempt to modify the same reference (i.e., when data races are possible). Finally, there exists a subtle race in which one thread gains a pointer to an object, but before it increments the object's reference count, all other references to this object are deleted concurrently by other threads and the object is reclaimed, causing that thread to increment the reference count of a reclaimed object.
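
As a minimal sketch of the first requirement, assuming a hypothetical Shared object type, the count can be kept consistent with atomic read-modify-write instructions; note that this alone does not close the acquire-versus-reclaim race described last:

    #include <atomic>

    // Sketch: thread-safe count updates on a hypothetical Shared type.
    struct Shared {
        std::atomic<int> refcount{1};   // the creator holds the first reference
    };

    void retain(Shared* obj) {
        // An increment only needs to be atomic, so relaxed ordering suffices.
        obj->refcount.fetch_add(1, std::memory_order_relaxed);
    }

    void release(Shared* obj) {
        // Acquire-release ordering makes all prior writes to the object
        // visible to the thread that performs the final decrement.
        if (obj->refcount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete obj;                 // count reached zero: reclaim
    }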

The main advantage of reference counting over tracing garbage collection is that objects are reclaimed as soon as they can no longer be referenced, and in an incremental fashion, without long pauses for collection cycles and with a clearly defined lifetime for every object. In real-time applications or systems with limited memory, this is important for maintaining responsiveness. Reference counting is also among the simplest forms of memory management to implement. It also allows for effective management of non-memory resources such as operating system objects, which are often much scarcer than memory (tracing garbage collection systems use finalizers for this, but the delayed reclamation may cause problems). Weighted reference counts are a good solution for garbage collection in a distributed system.


Tracing garbage collection cycles are triggered too often if the set of live objects fills most of the available memory; tracing requires extra space to be efficient. Reference counting performance does not deteriorate as the total amount of free space decreases.[2]


Reference counts are also useful information to use as input to other runtime optimizations. For example, systems that depend heavily on immutable objects such as many functional programming languages can suffer an efficiency penalty due to frequent copies. However, if the compiler (or runtime system) knows that a particular object has only one reference (as most do in many systems), and that the reference is lost at the same time that a similar new object is created (as in the string append statement str ← str + "a"), it can replace the operation with a mutation on the original object.
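
A minimal sketch of this optimization follows, using shared_ptr's use_count() as a stand-in for the compiler's uniqueness knowledge (the count is exact here only when no other threads share the object; append is a hypothetical name):

    #include <memory>
    #include <string>

    // If the buffer has exactly one reference, mutate in place; else copy.
    std::shared_ptr<std::string> append(std::shared_ptr<std::string> s, char c) {
        if (s.use_count() == 1) {
            s->push_back(c);                         // sole owner: no copy
            return s;
        }
        auto fresh = std::make_shared<std::string>(*s);  // shared: copy first
        fresh->push_back(c);
        return fresh;
    }

Mirroring the statement str ← str + "a", where the old reference dies as the new value is created, the caller would move its reference in: str = append(std::move(str), 'a');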


Reference counting in its naive form has three main disadvantages over tracing garbage collection, each of which requires additional mechanisms to ameliorate: the overhead of frequent count updates, the inability to reclaim reference cycles, and the cost of making updates atomic in a concurrent setting, all discussed above.


In addition to these, if the memory is allocated from a free list, reference counting suffers from poor locality. Reference counting alone cannot move objects to improve cache performance, so high performance collectors implement a tracing garbage collector as well. Most implementations (such as the ones in PHP and Objective-C) suffer from poor cache performance since they do not implement copying objects.[3]

Graph interpretation

When dealing with garbage collection schemes, it is often helpful to think of the reference graph, which is a directed graph where the vertices are objects and there is an edge from an object A to an object B if A holds a reference to B. We also have a special vertex or vertices representing the local variables and references held by the runtime system, and no edges ever go to these nodes, although edges can go from them to other nodes.


In this context, the simple reference count of an object is the in-degree of its vertex. Deleting a vertex is like collecting an object. It can only be done when the vertex has no incoming edges, so it does not affect the out-degree of any other vertices, but it can affect the in-degree of other vertices, causing their corresponding objects to be collected as well if their in-degree also becomes 0 as a result.


The part of the graph reachable from the special vertex contains the objects that cannot be collected, while the remaining components of the graph contain only garbage. If a reference-counting garbage collection algorithm is implemented, then each of these garbage components must contain at least one cycle; otherwise, they would have been collected as soon as their reference count (i.e., the number of incoming edges) dropped to zero.

Dealing with inefficiency of updates

Incrementing and decrementing reference counts every time a reference is created or destroyed can significantly impede performance. Not only do the operations take time, but they damage cache performance and can lead to pipeline bubbles. Even read-only operations like calculating the length of a list require a large number of reads and writes for reference updates with naive reference counting.


One simple technique is for the compiler to combine a number of nearby reference updates into one. This is especially effective for references which are created and quickly destroyed. Care must be taken, however, to put the combined update at the right position so that a premature free can be avoided.
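
As an illustrative sketch (rc_inc and rc_dec are hypothetical runtime calls, not from the source), the transformation looks like this:

    // Naively, a reference passing through a short-lived temporary produces
    // a paired update around its use:
    //
    //     rc_inc(obj);    // temporary reference created
    //     use(obj);
    //     rc_dec(obj);    // temporary reference destroyed
    //
    // A compiler that can prove obj's count stays positive across this window
    // may combine the pair into nothing:
    //
    //     use(obj);       // the increment and decrement cancel
    //
    // The combined update must not be moved past any point where the count
    // could otherwise reach zero, or obj would be freed prematurely.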


The Deutsch-Bobrow method of reference counting capitalizes on the fact that most reference count updates are in fact generated by references stored in local variables. It ignores these references, only counting references in data structures, but before an object with reference count zero can be deleted, the system must verify with a scan of the stack and registers that no other reference to it still exists.


Another technique devised by Henry Baker involves deferred increments,[4] in which references which are stored in local variables do not immediately increment the corresponding reference count, but instead defer this until it is necessary. If such a reference is destroyed quickly, then there is no need to update the counter. This eliminates a large number of updates associated with short-lived references (such as the above list-length-counting example). However, if such a reference is copied into a data structure, then the deferred increment must be performed at that time. It is also critical to perform the deferred increment before the object's count drops to zero, to avoid a premature free.


A dramatic decrease in the overhead of counter updates was obtained by Levanoni and Petrank.[5][6] They introduce the update coalescing method, which coalesces many of the redundant reference count updates. Consider a pointer that in a given interval of the execution is updated several times. It first points to an object O1, then to an object O2, and so forth, until at the end of the interval it points to some object On. A reference counting algorithm would typically execute rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++, rc(O3)--, ..., rc(On)++. But most of these updates are redundant: in order to have the reference count properly evaluated at the end of the interval, it is enough to perform rc(O1)-- and rc(On)++.
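
A minimal sketch of the idea, with illustrative names (a real collector logs slots per mutator thread and handles reclamation):

    #include <unordered_map>

    struct Obj { int rc = 0; };

    // Maps each updated slot to the value it held when the interval began.
    std::unordered_map<Obj**, Obj*> dirty;

    void write_ptr(Obj** slot, Obj* newval) {
        dirty.try_emplace(slot, *slot);   // record O1 only on the first write
        *slot = newval;                   // O2..On-1 generate no count traffic
    }

    void end_interval() {
        for (auto& [slot, oldval] : dirty) {
            if (oldval) --oldval->rc;     // rc(O1)--
            if (*slot)  ++(*slot)->rc;    // rc(On)++
            // objects whose rc reached zero would be reclaimed here
        }
        dirty.clear();
    }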


Levanoni and Petrank showed in 2001 how to use such update coalescing in a reference counting collector. When using update coalescing with an appropriate treatment of new objects, more than 99% of the counter updates are eliminated for typical Java benchmarks.


Update coalescing also eliminates the need to employ atomic operations during pointer updates in a concurrent setting, thus solving the third problem of naive reference counting (its costly overhead in a concurrent setting). Levanoni and Petrank presented an enhanced algorithm that may run concurrently with multithreaded applications, employing only fine-grained synchronization.[7]


Blackburn and McKinley's ulterior reference counting method (2003)[8] combines deferred reference counting with a copying nursery, observing that the majority of pointer mutations occur in young objects. This algorithm achieves throughput comparable with the fastest generational copying collectors while keeping the low, bounded pause times of reference counting.

Dealing with reference cycles

Perhaps the most obvious way to handle reference cycles is to design the system to avoid creating them. A system may explicitly forbid reference cycles; file systems with hard links often do this. Judicious use of "weak" (non-counted) references may also help avoid retain cycles; the Cocoa framework, for instance, recommends using "strong" references for parent-to-child relationships and "weak" references for child-to-parent relationships.[9]
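
The same convention can be sketched with C++ smart pointers (the types are illustrative; the source names Cocoa, not C++):

    #include <memory>

    struct Parent;

    struct Child {
        std::weak_ptr<Parent> parent;   // non-owning: does not keep Parent alive
    };

    struct Parent {
        std::shared_ptr<Child> child;   // owning: keeps Child alive
    };

    void demo() {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;           // no cycle of strong references
        if (auto q = p->child->parent.lock()) {
            // the child can still reach its parent while the parent lives
        }
    }   // p's count drops to zero here; Parent and then Child are both freed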


Systems may also be designed to tolerate or correct the cycles they create in some way. Developers may design code to explicitly "tear down" the references in a data structure when it is no longer needed, though this has the cost of requiring them to manually track that data structure's lifetime. This technique can be automated by creating an "owner" object that does the tearing-down when it is destroyed; for instance, a Graph object's destructor could delete the edges of its GraphNodes, breaking the reference cycles in the graph. Cycles may even be ignored in systems with short lives and a small amount of cyclic garbage, particularly when the system was developed using a methodology of avoiding cyclic data structures wherever possible, typically at the expense of efficiency.
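
A sketch of the Graph example just mentioned, with the destructor serving as the automated tear-down (the member layout is assumed):

    #include <memory>
    #include <vector>

    struct GraphNode {
        std::vector<std::shared_ptr<GraphNode>> edges;  // may form cycles
    };

    struct Graph {
        std::vector<std::shared_ptr<GraphNode>> nodes;
        ~Graph() {
            for (auto& n : nodes)
                n->edges.clear();   // break every cycle among the nodes
        }   // with the cycles gone, plain reference counting frees the nodes
    };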


Computer scientists have also discovered ways to detect and collect reference cycles automatically, without requiring changes in the data structure design. One simple solution is to periodically use a tracing garbage collector to reclaim cycles; since cycles typically constitute a relatively small amount of reclaimed space, the collector can be run much less often than with an ordinary tracing garbage collector.


Bacon describes a cycle-collection algorithm for reference counting with similarities to tracing collectors, including the same theoretical time bounds. It is based on the observation that a cycle can only become isolated when a reference count is decremented to a nonzero value. Every object on which this occurs is put on a "roots" list, and then the program periodically searches the objects reachable from those roots for cycles. It knows it has found a collectable cycle when decrementing all the reference counts on a cycle of references brings them all down to zero.[10] An enhanced version of this algorithm by Paz et al.[11] is able to run concurrently with other operations and improves efficiency by using the update coalescing method of Levanoni and Petrank.[5][6]
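
A much-simplified, synchronous sketch of the trial-deletion idea follows; Bacon's actual algorithm is incremental and uses color marking, both of which this omits, and all names are illustrative:

    #include <vector>

    struct Obj {
        int rc = 0;                       // true reference count
        int trial = 0;                    // scratch count used during a scan
        bool seen = false;
        std::vector<Obj*> children;
    };

    // Collect the subgraph reachable from a candidate root, copying rc into trial.
    void gather(Obj* o, std::vector<Obj*>& group) {
        if (o->seen) return;
        o->seen = true;
        o->trial = o->rc;
        group.push_back(o);
        for (Obj* c : o->children) gather(c, group);
    }

    // The subgraph is a dead cycle iff removing all internal edges
    // drives every trial count in it to zero.
    bool is_dead_cycle(Obj* root) {
        std::vector<Obj*> group;
        gather(root, group);
        for (Obj* o : group)
            for (Obj* c : o->children)
                --c->trial;               // trial-delete every internal edge
        bool dead = true;
        for (Obj* o : group) {
            o->seen = false;              // reset scratch state for next scan
            if (o->trial != 0) dead = false;  // an external reference remains
        }
        return dead;                      // true: garbage cycle, collectable
    }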

Examples of use

Garbage collection

As a collection algorithm, reference counting tracks, for each object, a count of the number of references to it held by other objects. If an object's reference count reaches zero, the object has become inaccessible, and can be destroyed.


When an object is destroyed, any objects referenced by that object also have their reference counts decreased. Because of this, removing a single reference can potentially lead to a large number of objects being freed. A common modification allows reference counting to be made incremental: instead of destroying an object as soon as its reference count becomes zero, it is added to a list of unreferenced objects, and periodically (or as needed) one or more items from this list are destroyed.
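
A sketch of this incremental variant (types and names are illustrative): zero-count objects are queued rather than destroyed on the spot, and a bounded batch is destroyed at a time to keep individual pauses short.

    #include <cstddef>
    #include <vector>

    struct Obj {
        int rc = 0;
        std::vector<Obj*> children;
    };

    std::vector<Obj*> to_free;        // objects whose count has reached zero

    void dec_ref(Obj* o) {
        if (--o->rc == 0)
            to_free.push_back(o);     // defer destruction instead of recursing
    }

    // Called periodically, or whenever the allocator needs memory.
    void reclaim_some(std::size_t batch) {
        while (batch-- && !to_free.empty()) {
            Obj* o = to_free.back();
            to_free.pop_back();
            for (Obj* c : o->children)
                dec_ref(c);           // may enqueue more work, never recurses
            delete o;
        }
    }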


Simple reference counts require frequent updates. Whenever a reference is destroyed or overwritten, the reference count of the object it references is decremented, and whenever one is created or copied, the reference count of the object it references is incremented.


Reference counting is also used in file systems and distributed systems, where full non-incremental tracing garbage collection is too time-consuming because of the size of the object graph and slow access speed.[14]

Component Object Model

Microsoft's Component Object Model (COM) and WinRT make pervasive use of reference counting. In fact, two of the three methods that all COM objects must provide (in the IUnknown interface) increment or decrement the reference count. Much of the Windows Shell and many Windows applications (including MS Internet Explorer, MS Office, and countless third-party products) are built on COM, demonstrating the viability of reference counting in large-scale systems.
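
A simplified sketch modeled on the IUnknown contract (QueryInterface is omitted and the Windows types are replaced; these are not the actual SDK declarations):

    #include <atomic>

    class Unknown {
    public:
        virtual unsigned long AddRef() {
            return ++refs_;               // caller now shares ownership
        }
        virtual unsigned long Release() {
            unsigned long n = --refs_;
            if (n == 0)
                delete this;              // COM objects free themselves
            return n;
        }
    protected:
        virtual ~Unknown() = default;     // destroyed only via Release()
        std::atomic<unsigned long> refs_{1};
    };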


One primary motivation for reference counting in COM is to enable interoperability across different programming languages and runtime systems. A client need only know how to invoke object methods in order to manage object life cycle; thus, the client is completely abstracted from whatever memory allocator the implementation of the COM object uses. As a typical example, a Visual Basic program using a COM object is agnostic towards whether that object was allocated (and must later be deallocated) by a C++ allocator or another Visual Basic component.

C++

C++ does not perform reference-counting by default, fulfilling its philosophy of not adding functionality that might incur overheads where the user has not explicitly requested it. Objects that are shared but not owned can be accessed via a reference, raw pointer, or iterator (a conceptual generalisation of pointers).


However, by the same token, C++ provides native ways for users to opt into such functionality: C++11 provides reference-counted smart pointers via the std::shared_ptr class, enabling automatic, shared memory management of dynamically allocated objects. Programmers can use this in conjunction with weak pointers (via std::weak_ptr) to break cyclic dependencies. Objects that are dynamically allocated but not intended to be shared can have their lifetime automatically managed using a std::unique_ptr.
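
For example, using the standard facilities just described (the counts shown are exact here because no other threads share the objects):

    #include <cassert>
    #include <memory>

    void demo() {
        auto sp1 = std::make_shared<int>(42);   // count: 1
        {
            auto sp2 = sp1;                     // copying increments: count 2
            assert(sp1.use_count() == 2);
        }                                       // sp2 destroyed: count back to 1
        std::weak_ptr<int> wp = sp1;            // observes without counting
        assert(sp1.use_count() == 1);
        auto up = std::make_unique<int>(7);     // sole owner: no count needed
    }                                           // sp1 and up free their objects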


In addition, C++11's move semantics further reduce the extent to which reference counts need to be modified, by removing the deep copy normally used when a function returns an object: the object's internal pointer can simply be transferred to the caller.
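
A small sketch of the effect: moving a shared_ptr transfers the existing reference, so the count is never touched.

    #include <memory>
    #include <utility>

    std::shared_ptr<int> make_value() {
        return std::make_shared<int>(1);        // moved out: no extra increment
    }

    void demo() {
        std::shared_ptr<int> a = make_value();  // count stays 1 throughout
        std::shared_ptr<int> b = std::move(a);  // ownership transferred: still 1
    }                                           // b frees the object; a is empty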

External links

The Memory Manager Reference: Beginner's Guide: Recycling: Reference Counts

An On-the-Fly Reference-Counting Garbage Collector for Java, Yossi Levanoni and Erez Petrank

Atomic Reference Counting Pointers: A lock-free, async-free, thread-safe, multiprocessor-safe reference counting pointer, Kirk Reinholtz

Extending and Embedding the Python Interpreter: Extending Python with C or C++: Reference Counts, Guido van Rossum

Down for the count? Getting reference counting back in the ring, Rifat Shahriyar, Stephen M. Blackburn and Daniel Frampton