General considerations on coordination
Summary of problem
Attention is crucial in determining which phenomena appear to be bound together, noticed, and remembered.[3] The proposed solution to this aspect of the binding problem is generally referred to as temporal synchrony. At the most basic level, all neural firing and its adaptation depend on precise timing (Feldman, 2010). At a much larger scale, recurring patterns in large-scale neural activity are a major diagnostic and scientific tool.[4]
Synchronization theory and research
A popular hypothesis, advanced by Peter Milner in his 1974 article A Model for Visual Shape Recognition, has been that features of individual objects are bound/segregated via synchronization of the activity of different neurons in the cortex.[5][6] The theory, called binding-by-synchrony (BBS), holds that binding occurs through the transient mutual synchronization of neurons located in different regions of the brain when a stimulus is presented.[7] Empirical testing of the idea gained momentum when von der Malsburg proposed that feature binding posed a special problem that could not be solved by cellular firing rates alone.[8] However, it has since been argued that this difficulty may be less severe than supposed, because cortical modules were found to code jointly for multiple features.[9] Temporal synchrony appears most relevant to the first problem, "General Considerations on Coordination", because it offers an effective means of sampling the surroundings and supports grouping and segmentation. A number of studies suggested that there is indeed a relationship between rhythmic synchronous firing and feature binding. This rhythmic firing appears to be linked to intrinsic oscillations in neuronal somatic potentials, typically in the gamma range around 40–60 Hz.[10] The positive arguments for a role of rhythmic synchrony in resolving the segregational object-feature binding problem have been summarized by Singer.[11] There is certainly extensive evidence for synchronization of neural firing as part of responses to visual stimuli.
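The kind of mutual synchronization BBS appeals to can be illustrated with a minimal Kuramoto-oscillator sketch. This is an illustrative toy only, not a model from the cited studies: the group size, coupling strength, and the choice of intrinsic frequencies near the gamma band are assumptions made for the example.

```python
import math
import random

def kuramoto_step(phases, omegas, k, dt):
    """One Euler step of dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)."""
    n = len(phases)
    new = []
    for theta, omega in zip(phases, omegas):
        coupling = k / n * sum(math.sin(p - theta) for p in phases)
        new.append((theta + (omega + coupling) * dt) % (2 * math.pi))
    return new

def order_parameter(phases):
    """|r| = 1 means perfect phase synchrony; near 0 means none."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

random.seed(0)
n = 20
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
# intrinsic frequencies scattered around 40 Hz (gamma band), in rad/s
omegas = [2 * math.pi * random.gauss(40, 0.5) for _ in range(n)]

r0 = order_parameter(phases)
for _ in range(2000):                    # simulate 1 s at dt = 0.5 ms
    phases = kuramoto_step(phases, omegas, k=100.0, dt=0.0005)
r1 = order_parameter(phases)
print(f"synchrony before: {r0:.2f}, after: {r1:.2f}")
```

With coupling well above the critical value, the initially scattered phases lock together, which is the qualitative behavior the BBS hypothesis assumes for transiently coupled neuronal groups.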
However, there is inconsistency between findings from different laboratories. Moreover, a number of recent reviewers, including Shadlen and Movshon[6] and Merker,[12] have raised concerns that the theory may be untenable. Thiele and Stoner found that perceptual binding of two moving patterns (coherent and noncoherent plaids) had no effect on the synchronization of the neurons responding to them.[13] In the primary visual cortex, Dong et al. found that neural synchrony was independent of binding condition: whether two neurons were responding to contours of the same shape or of different shapes had no effect on their synchrony.
Shadlen and Movshon[6] raise a series of doubts about both the theoretical and the empirical basis for the idea of segregational binding by temporal synchrony. There is no biophysical evidence that cortical neurons are selective to synchronous input at this degree of precision, and cortical activity with such precise synchrony is rare. Synchronization has also been connected to endorphin activity. It has been shown that precise spike timing may not be necessary to provide a mechanism for visual binding and is relevant only in modeling certain neuronal interactions. In contrast, Seth[14] describes an artificial brain-based robot that demonstrates multiple, separate, widely distributed neural circuits firing at different phases, suggesting that regular brain oscillations at specific frequencies are essential to the neural mechanisms of binding.
Goldfarb and Treisman[15] point out that a logical problem appears to arise for binding solely via synchrony if there are several objects that share some of their features and not others. At best synchrony can facilitate segregation supported by other means (as von der Malsburg acknowledges).[16]
A number of neuropsychological studies suggest that the association of color, shape and movement as "features of an object" is not simply a matter of linking or "binding": failing to bind elements into groups makes such association inefficient.[17] They also give extensive evidence for top-down feedback signals that ensure that sensory data are handled as features of (sometimes wrongly) postulated objects early in processing. Pylyshyn[18] has likewise emphasized the way the brain seems to pre-conceive objects, to which features are then allocated and which are attributed continuing existence even if features such as color change. This may be because visual integration increases over time and indexing visual objects helps to ground visual concepts.
Feature integration theory
Summary of problem
The visual feature binding problem refers to the question of why we do not confuse a red circle and a blue square with a blue circle and a red square. Understanding of the brain circuits recruited for visual feature binding is increasing. A binding process is required for us to accurately encode the various visual features that are represented in separate cortical areas.
In her feature integration theory, Treisman suggested that one of the first stages of binding between features is mediated by the features' links to a common location. In the second stage, combining the individual features of an object requires attention, and selecting that object occurs within a "master map" of locations. Psychophysical demonstrations of binding failures under conditions of full attention provide support for the idea that binding is accomplished through common location tags.[19]
An implication of these approaches is that sensory data such as color or motion may not normally exist in "unallocated" form. For Merker:[20] "The 'red' of a red ball does not float disembodied in an abstract color space in V4." If color information allocated to a point in the visual field is converted directly, via the instantiation of some form of propositional logic (analogous to that used in computer design) into color information allocated to an "object identity" postulated by a top-down signal as suggested by Purves and Lotto (e.g. There is blue here + Object 1 is here = Object 1 is blue) no special computational task of "binding together" by means such as synchrony may exist. (Although Von der Malsburg[21] poses the problem in terms of binding "propositions" such as "triangle" and "top", these, in isolation, are not propositional.)
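The propositional combination sketched above ("There is blue here" plus "Object 1 is here" yielding "Object 1 is blue") can be made concrete in a toy program. This is a didactic sketch, not a claim about the cited authors' models; the scene data, locations, and all names are illustrative assumptions.

```python
# Toy binding-by-location-tag: features detected at a visual-field location
# are attributed to whatever object is indexed at that same location.

features_at = {          # bottom-up feature maps: location -> feature values
    (2, 3): {"color": "blue", "shape": "square"},
    (7, 1): {"color": "yellow", "shape": "circle"},
}
object_at = {            # top-down object indices: location -> object id
    (2, 3): "object_1",
    (7, 1): "object_2",
}

def bind_by_location(features_at, object_at):
    """'There is blue here' + 'object 1 is here' => 'object 1 is blue'."""
    bound = {}
    for loc, feats in features_at.items():
        obj = object_at.get(loc)
        if obj is not None:
            bound.setdefault(obj, {}).update(feats)
    return bound

scene = bind_by_location(features_at, object_at)
print(scene)   # each object ends up owning the features found at its location
```

On this view, no special "binding together" computation beyond the shared location tag is needed, which is the point the paragraph above attributes to Purves and Lotto.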
How signals in the brain come to have propositional content, or meaning, is a much larger issue. However, both Marr[22] and Barlow[23] suggested, on the basis of what was known about neural connectivity in the 1970s, that the final integration of features into a percept would be expected to resemble the way words operate in sentences.
The role of synchrony in segregational binding remains controversial. Merker[20] has recently suggested that synchrony may be a feature of areas of activation in the brain that relates to an "infrastructural" feature of the computational system analogous to increased oxygen demand indicated via BOLD signal contrast imaging. Apparent specific correlations with segregational tasks may be explainable on the basis of interconnectivity of the areas involved. As a possible manifestation of a need to balance excitation and inhibition over time it might be expected to be associated with reciprocal re-entrant circuits as in the model of Seth et al.[14] (Merker gives the analogy of the whistle from an audio amplifier receiving its own output.)
Experimental work
Visual feature binding is suggested to rely on selective attention to the locations of objects. If spatial attention does play a role in binding, it will do so primarily when object location acts as a binding cue. One study's functional MRI findings showed that regions of the parietal cortex involved in spatial attention were more engaged in feature conjunction tasks than in single-feature tasks. When multiple objects were shown simultaneously at different locations, the parietal cortex was activated, whereas when multiple objects were shown sequentially at the same location it was less engaged.[24]
Consciousness and binding
Summary of problem
Smythies[27] defines the combination problem, also known as the subjective unity of perception, as "How do the brain mechanisms actually construct the phenomenal object?". Revonsuo[1] equates this to "consciousness-related binding", emphasizing the entailment of a phenomenal aspect. As Revonsuo explores in 2006,[28] there are nuances of difference beyond the basic BP1:BP2 division. Smythies speaks of constructing a phenomenal object ("local unity" for Revonsuo), but philosophers such as René Descartes, Gottfried Wilhelm Leibniz, Immanuel Kant, and James (see Brook and Raymont)[29] have typically been concerned with the broader unity of a phenomenal experience ("global unity" for Revonsuo), which, as Bayne[30] illustrates, may involve features as diverse as seeing a book, hearing a tune and feeling an emotion. Further discussion will focus on this more general problem of how sensory data that may have been segregated into, for instance, "blue square" and "yellow circle" are to be re-combined into a single phenomenal experience of a blue square next to a yellow circle, plus all other features of their context. There is a wide range of views on just how real this "unity" is, but the existence of medical conditions in which it appears to be subjectively impaired, or at least restricted, suggests that it is not entirely illusory.[31]
There are many neurobiological theories about the subjective unity of perception. Different visual features such as color, size, shape, and motion are computed by largely distinct neural circuits but we experience this as an integrated whole. The different visual features interact with each other in various ways. For example, shape discrimination of objects is strongly affected by orientation but only slightly affected by object size.[32] Some theories suggest that global perception of the integrated whole involves higher order visual areas.[33] There is also evidence that the posterior parietal cortex is responsible for perceptual scene segmentation and organization.[34] Bodies facing each other are processed as a single unit and there is increased coupling of the extrastriate body area (EBA) and the posterior superior temporal sulcus (pSTS) when bodies are facing each other.[35] This suggests that the brain is biased towards grouping humans in twos or dyads.[36]
History
Early philosophers René Descartes and Gottfried Wilhelm Leibniz[37] noted that the apparent unity of our experience is an all-or-none qualitative characteristic that does not appear to have an equivalent in the known quantitative features, like proximity or cohesion, of composite matter. William James[38] in the nineteenth century, considered the ways the unity of consciousness might be explained by known physics and found no satisfactory answer. He coined the term "combination problem", in the specific context of a "mind-dust theory" in which it is proposed that a full human conscious experience is built up from proto- or micro-experiences in the way that matter is built up from atoms. James claimed that such a theory was incoherent, since no causal physical account could be given of how distributed proto-experiences would "combine". He favoured instead a concept of "co-consciousness" in which there is one "experience of A, B and C" rather than combined experiences. A detailed discussion of subsequent philosophical positions is given by Brook and Raymont (see 26). However, these do not generally include physical interpretations.
Whitehead[39] proposed a fundamental ontological basis for a relation consistent with James's idea of co-consciousness, in which many causal elements are co-available or "compresent" in a single event or "occasion" that constitutes a unified experience. Whitehead did not give physical specifics but the idea of compresence is framed in terms of causal convergence in a local interaction consistent with physics. Where Whitehead goes beyond anything formally recognized in physics is in the "chunking" of causal relations into complex but discrete "occasions". Even if such occasions can be defined, Whitehead's approach still leaves James's difficulty with finding a site, or sites, of causal convergence that would make neurobiological sense for "co-consciousness". Sites of signal convergence do clearly exist throughout the brain but there is a concern to avoid re-inventing what Daniel Dennett[40] calls a Cartesian Theater or a single central site of convergence of the form that Descartes proposed.
Descartes's central "soul" is now rejected because neural activity closely correlated with conscious perception is widely distributed throughout the cortex. The remaining choices appear to be either separate involvement of multiple distributed causally convergent events or a model that does not tie a phenomenal experience to any specific local physical event but rather to some overall "functional" capacity. Whichever interpretation is taken, as Revonsuo[1] indicates, there is no consensus on what structural level we are dealing with – whether the cellular level, that of cellular groups as "nodes", "complexes" or "assemblies" or that of widely distributed networks. There is probably only general agreement that it is not the level of the whole brain, since there is evidence that signals in certain primary sensory areas, such as the V1 region of the visual cortex (in addition to motor areas and cerebellum), do not contribute directly to phenomenal experience.
Cognitive science and binding
In modern connectionism, cognitive neuroarchitectures have been developed (e.g. "Oscillatory Networks",[64] the "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture",[65] "Holographic Reduced Representations (HRRs)",[66] the "Neural Engineering Framework (NEF)"[67]) that solve the binding problem by means of integrative synchronization mechanisms (e.g. the phase-synchronized binding-by-synchrony (BBS) mechanism). They do so (1) in perceptual cognition ("low-level cognition"): the neurocognitive performance by which a perceived object or event (e.g. a visual object) is dynamically "bound together" from its properties (e.g. shape, contour, texture, color, direction of motion) into a mental representation, i.e. can be experienced in the mind as a unified "Gestalt" in the sense of Gestalt psychology ("feature binding", "feature linking"); and (2) in language cognition ("high-level cognition"): the neurocognitive performance by which a linguistic unit (e.g. a sentence) is generated by dynamically relating semantic concepts and syntactic roles to each other, so that systematic and compositional symbol structures and propositions can be generated and experienced as complex mental representations in the mind ("variable binding").[68][69][70][71]
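Of the architectures listed, Holographic Reduced Representations offer the most compact illustration of variable binding: a role vector is bound to a filler vector by circular convolution, bound pairs are superposed by addition, and circular correlation approximately recovers a filler from its role. The sketch below follows that general scheme; the dimensionality and the role/filler names are illustrative assumptions, not drawn from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality; higher D gives cleaner unbinding

def vec():
    """Random HRR vector with elements ~ N(0, 1/D), so expected norm ≈ 1."""
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def bind(a, b):
    """Circular convolution (via FFT): binds a role vector to a filler."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    """Circular correlation with b approximately inverts bind(a, b)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(b))))

def sim(x, y):
    """Cosine similarity between two vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

color, shape = vec(), vec()                      # role vectors
blue, square = vec(), vec()                      # filler vectors
obj = bind(color, blue) + bind(shape, square)    # superposed bindings

decoded = unbind(obj, color)                     # query: what is the color?
print(sim(decoded, blue), sim(decoded, square))
```

The decoded vector is a noisy copy of the correct filler: its similarity to blue is high while its similarity to square stays near zero, so a clean-up memory over known fillers can recover the answer.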
Shared intentionality
According to Igor Val Danilov,[72] knowledge about neurophysiological processes during shared intentionality can reveal insights into the binding problem, and even into the development of object perception, since intentionality emerges before organisms confront the binding problem. Indeed, at the beginning of life, the environment is a cacophony of stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations. Because the environment is uncategorised for organisms at this early stage of development, sensation is too limited by noise to solve the cue problem: a relevant stimulus cannot overcome the magnitude of the noise as it passes through the senses. While very young organisms need to combine objects, background, and abstract or emotional features into a single experience in order to build a representation of the surrounding reality, they cannot independently distinguish the relevant sensory stimuli to integrate into object representations. Even the embodied dynamical system approach cannot get around this cue-to-noise problem. The application of embodied information requires an environment already categorised into objects, a holistic representation of reality, which arises through (and only after the emergence of) perception and intentionality.[73][74]