Reinforcement learning pairs activations of the emerging invariant object category with a sequence of external reinforcing inputs (Figure 12G). It thereby converts the active invariant object category into a conditioned reinforcer and a source of incentive motivation by strengthening associative links from the category to the value category, and from the value category to the object-value category, respectively.
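A minimal sketch of these two associative updates, assuming a simple gated correlational learning rule and hypothetical names (the model's actual learning laws are defined in the Appendix):

```python
def reinforcement_learning_step(category, value, object_value,
                                W_cat_val, W_val_objval, lr=0.1):
    """One associative update pairing an active invariant object category with
    a reinforced value category. Hypothetical simplification: the names, the
    learning rate, and the gated-decay form are assumptions, not the model's
    exact equations.

    category:     activity of the invariant object category (0..1)
    value:        activity of the reinforced value category (0..1)
    object_value: activity of the object-value category (0..1)
    """
    # Conditioned-reinforcer link: invariant object category -> value category.
    # Learning is gated by the presynaptic category activity.
    W_cat_val += lr * category * (value - W_cat_val)
    # Incentive-motivation link: value category -> object-value category.
    W_val_objval += lr * value * (object_value - W_val_objval)
    return W_cat_val, W_val_objval
```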

After the reset occurs, another simulated scene, with the cellphone in position 8 (as in Figures 12F,D2), is fed into the model to repeat the learning processes. As explained above, the initial eye fixation is located at the center of the scene, so the cellphone generates an extra-foveal view in the What stream, where a view-specific category neuron in region 8 is activated (Figure 12E2). This neuron activates the corresponding view category integrator neuron (Figure 12F2), which persists and learns to be associated with a new invariant object category neuron (Figure 12H, red curve) and with the subsequent categorical layers.

After a saccadic eye movement is generated to bring the cellphone into the foveal region (region 13), the active shroud of the cellphone in region 8 enables eye movement explorations to occur on the cellphone surface, and thus to generate a sequence of foveal views that initiate new view-specific category learning and view integrator activations (Figures 12E4,F4).

However, as noted in section 2, how the eyes choose the next saccadic target is not random. The features that were selected in the simulated scene with the cellphone at region 7 are thus chosen again when learning the cellphone located at region 8. That is, at least one previously learned view-specific category neuron is activated, which in turn activates the corresponding view category integrator. This integrator had already learned to be associated with the previously learned invariant object category.

As a result, the extra-foveal views of the cellphone at regions 7 and 8 are linked to the same invariant object category, thereby developing its positionally-invariant property. Before the cellphone is shifted into the foveal region by a saccadic eye movement, a view from the retinal periphery is generated and activates the view-specific category neuron in region 9 (Figure 12E3) and the corresponding view category integrator neuron (Figure 12F3), which activates a new invariant object category neuron (Figure 12H, green curve).

By the same process that was explained above, this view category integrator neuron can learn to be associated with the previously learned invariant object category, which is activated by a view category integrator neuron after a feature on the cellphone surface is repeatedly selected (Figure 12D3). The same processes take place for objects appearing at other extra-foveal positions.
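The recruitment of the same invariant object category at a new position can be sketched as follows; the argmax choice and the variable names are simplifying assumptions, since the model uses recurrent competition among category cells:

```python
import numpy as np

def choose_invariant_category(integrator_activity, W_int_obj):
    """Pick the invariant object category driven by the currently active view
    category integrators (hedged sketch, not the model's dynamics).

    integrator_activity: (n_integrators,) activities, including any previously
                         learned integrator that was reactivated because the
                         same object features were selected at the new position
    W_int_obj:           (n_integrators, n_categories) learned association weights
    """
    category_input = integrator_activity @ W_int_obj
    # A reactivated, previously associated integrator sends its learned weight
    # to the old invariant category, so that category wins and the extra-foveal
    # view at the new position becomes linked to the same invariant category.
    return int(np.argmax(category_input))
```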

Figure 11B shows the development of model responses across learning trials, with and without reinforcement learning. The model requires approximately 30 to 40 trials before the associative weights become asymptotically stable. Category learning without reinforcement learning eliminates the ITa-AMYG-ORB resonances by setting the weights from invariant object categories to value categories to zero. As a result, responses of the value category remain zero (Figure 11B2, open circles), and responses of the invariant category (Figure 11B1), object-value category (Figure 11B3), and name category (Figure 11B4) show smaller increments compared to those during reinforcement learning trials.

To carry out the reinforcement learning trials, it was assumed that the 24 objects that were conditioned were associated with one of three value categories. For definiteness (although this has no effect on the simulations), each value category was associated with 8 of the 24 objects. When the first object was associated with its value category, there was no effect of other objects because their initial conditioned reinforcer and incentive motivational weights were chosen equal to zero. Consider learning trials with the second object that is associated with a given value category.

When the value category gets activated, it can send incentive motivational signals to the object-value category of the first object to be conditioned. However, as shown in Equation A75, these conditioned signals are modulatory. Since the first object is not present, its invariant object category is inactive, and thus its object-value category does not receive an input from the object category. As a result, the object-value category of the first object remains inactive. This is also true for all objects that were associated with a given value category when a different object is presented.
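A hedged sketch of this modulatory, multiplicative relationship (the function name, gain parameter, and algebraic form are assumptions; the model's actual form is Equation A75):

```python
def object_value_activity(object_input, motivational_signal, gain=1.0):
    """Object-value category response with a modulatory incentive input.
    Hedged sketch: the motivational signal multiplies, but cannot replace,
    the driving input from the invariant object category, so with no object
    input the cell stays silent no matter how strong the motivation.
    """
    return object_input * (1.0 + gain * motivational_signal)
```

For example, object_value_activity(0.0, 1.0) returns 0.0, which is why the object-value category of an absent object remains inactive even when its value category is strongly activated by another object.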

Top-down search tasks are based on the view- and positionally-invariant object category learning of 24 objects, described in section 8. The top-down primed search can be triggered either via a name category neuron in PFC that receives a priming name input (Figures 6A,B), or via a value category in AMYG that receives a sufficiently large internal motivational drive signal (Figures 6C,D). Either way, the corresponding object-value category in ORB can be activated and projects to the invariant object category in ITa.

The amplified invariant object category top-down primes multiple learned view-specific category neurons in ITp through view category integrator neurons. During the primed search processes, the object-value categories, the invariant object categories, and the view category integrators receive volitional control signals from the BG to ensure that the top-down prime is appropriately activated. Bottom-up inputs from the objects in the viewed scene also activate the view-specific category neurons in ITp.

The view-specific category with the best combination of top-down prime and bottom-up input is the most highly activated. This enables a winner-take-all choice of the primed view-specific category, using the choice mechanism that was summarized in section 7. The selected view-specific category can induce eye movements toward the target object via either a direct or an indirect pathway. In response to realistic scenes, many factors may reduce performance accuracy, including distractors, internal noise, speed-accuracy tradeoffs, imperfections of figure-ground separation, and the like.
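A minimal sketch of how a modulatory top-down prime and a bottom-up input might combine before a winner-take-all choice (the names and the combination rule are assumptions; the model implements the choice with recurrent competitive dynamics rather than an argmax):

```python
import numpy as np

def primed_winner_take_all(bottom_up, top_down_prime, volition=1.0):
    """Choose the view-specific category with the best combination of
    bottom-up input and top-down prime.

    bottom_up, top_down_prime: arrays of activities over view-specific
    category cells; volition gates the prime so that it can amplify matched
    cells without creating activity on its own.
    """
    combined = bottom_up * (1.0 + volition * top_down_prime)  # prime is modulatory
    winner = int(np.argmax(combined))
    return winner, combined
```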

Another important factor that can limit search accuracy in the brain is the cortical magnification factor. As noted in section 9, the high peripheral acuity of these models is due to the fact that, for simplicity, they do not incorporate the cortical magnification factor, which would cause object representations that are processed from extra-foveal positions to have coarse sensory representations.

If several objects in a scene are featurally similar, their peripheral representations could then be associated with more than one similar object in foveal view, and thus would not unambiguously predict a definite object category. Rather, they may only predict a coarser and more abstract category. However, once these objects are foveated, they benefit from the higher resolution of foveal processing.

Figure 13A shows an exemplar of a search scene in which the cellphone object is designated as Waldo. Search is based on positionally- and view-invariant object category learning of 24 objects, as illustrated in (A). In (B), a cognitive primed search is illustrated. (A) In the indirect route, the amplified view-specific category selectively primes the target boundary to make it stronger than other object boundaries in the search scene. The kernels have four orientations and three scales. The resulting attentional shroud draws spatial attention to the primed cellphone object.

(B) Cognitive primed search. The category representations in a top-down cognitive primed search are consistent with the interactions in Figures 6A,B. The bars represent category activities at the time when the view-specific category is selectively amplified through the matching process.

Only the cellphone category receives a cognitive priming signal. The value category remains at rest because no reinforcement signals are received. The object-value category corresponding to the cellphone is primed by the cellphone name category. A volitional signal also reaches the invariant object category and view category integrator stages to enable them to also fire in response to their top-down primes, as now discussed. (4) Invariant object category: the cellphone invariant object category fires in response to its object-value category and volitional inputs. The view category integrators corresponding to the cellphone also fire in response to their invariant object category and volitional inputs.

Colored bars in each position index activations corresponding to the different objects. The view-specific category at position 9 receives a top-down priming input from its view category integrator and a bottom-up input from the cellphone stimulus. It is thereby selectively amplified.

The category representations during a motivational drive search are consistent with the interactions in Figures 6C,D. The value category that was associated with the cellphone receives an internal motivational priming input that activates a motivational signal to the object-value category which, supplemented by a volitional signal, amplifies the corresponding invariant object category through an inferotemporal-amygdala-orbitofrontal resonance.

The various results are analogous to those in Figure 13B.

In the simulation of a cognitively primed search that is summarized in Figure 13B, the name category neuron corresponding to the cellphone receives a priming signal (Figure 13B1) and then projects to the object-value category. The active object-value category (Figure 13B3) continually excites the corresponding invariant object category (Figure 13B4).

To show the effect of a purely cognitive prime, it is assumed that the value categories are not active. In the simulation, this happens because the value categories do not receive any internal drive inputs, and thus their activities remain at the rest level (Figure 13B2).

The active invariant object category, supplemented by volitional signals, top-down primes all the view- and positionally-specific categories through the view category integrator neurons. The view category integrators corresponding to different positions receive both top-down primes from the invariant object categories and volitional signals from the BG. As a result, all the view- and positionally-specific categories that were associated with the cellphone object category are amplified (Figure 13B5).

The view-specific category whose position matches the bottom-up Waldo input receives the most activation (Figure 13B6); that is, the category that encodes the extra-foveal view of the cellphone at the 9th position.

To distinguish the effect of motivational drive search from the cognitive primed search, the connections from the object-value categories to name categories are eliminated so that the name category neurons stay at their rest level (Figure 14A). As in the top-down cognitive primed search, the enhanced invariant object category (Figure 14D) top-down primes all the view category integrators (Figure 14E) and, in turn, its view-specific category.

This prime can now amplify the most active view-specific category, which corresponds to the extra-foveal cellphone view at the 9th position (Figure 14F). The selected view-specific category neuron in ITp induces an eye movement to the Waldo target through either a direct or an indirect route. The direct route from the view-specific category layer to the eye movement map via a learned adaptive weight can more quickly elicit a saccadic eye movement.


Learning between a view-specific category and the eye movement map occurs during positionally-invariant category learning, when a non-foveal object learns to activate its view-specific category and generates an eye movement command to move the eyes to its position. However, along the indirect route, the selected view-specific category neuron selectively primes its target boundary representation (Figure 13A3), which gates the surface filling-in process to increase the contrast of the selected target surface (Figure 13A4).

Spatial attention corresponding to the target surface competitively wins to form an attentional shroud through a surface-shroud resonance (Figure 13A5). As a result, the surface contour (Figure 13A6) of the attended surface is strengthened, leading to selection of its hot spots as eye movement targets.
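The selection of hot spots on the attended surface as saccade targets can be sketched as follows, with hypothetical array names and a bare argmax standing in for the model's competitive eye movement map (Equation A43):

```python
import numpy as np

def next_saccade_target(surface_contour, shroud):
    """Pick the next eye movement target as the strongest surface-contour
    hot spot on the attended surface.

    surface_contour: 2D array of surface contour strengths
    shroud:          2D attentional shroud of the same shape (roughly 0 or 1)
    """
    gated = surface_contour * shroud              # only the attended surface competes
    i, j = np.unravel_index(np.argmax(gated), gated.shape)
    return i, j
```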

For example, the cellphone object in Figure 13A is set as a Waldo target and is simulated under different search pathways via either the direct or indirect route until Waldo is foveated. The bottom-up search pathway has longer search reaction times compared to the top-down cognitive primed and the motivational drive pathways. In addition, the reaction time in the direct pathway is always shorter than in the indirect pathway because the indirect pathway has more stage interactions to compute the saccadic eye movement.

The search reaction times of the direct route in each search mechanism are similar because the eye movement is activated via the learned pathway from the selected view-specific category and the interactions between categorical layers are the same, whereas the search reaction times in the indirect route are different for different targets due to the different surface contour strength of the various objects.

Search reaction times under different search conditions. The search reaction times are statistically computed in the eye movement map via bottom-up, cognitive primed, and motivational drive search mechanisms through a direct and an indirect route. Blue bars correspond to the direct route and red bars indicate the indirect route.

See the text for further discussion. The indirect path reaction times, between and ms, are comparable to, say, the reaction times in the Brown and Denney experiments on spatial attention shifts, which are quantitatively simulated in Foley et al.

The model introduces several major additional improvements and innovations. First, incorporating positionally-invariant object category learning is necessary to perform the different search tasks, which all show how object attention in the What stream can activate spatial attention in the Where stream.

The model thus incorporates multiple bi-directional connections between two cortical streams: from the Where stream to the What stream to perform both view- and positionally-sensitive and view- and positionally-invariant category learning, and from the What stream to the Where stream to perform either bottom-up or top-down primed searches. Second, volitional signals from the BG are needed to convert top-down priming signals into suprathreshold activations during search tasks. Third, during category learning in the What stream, cognitive-emotional resonances can strengthen object category, value category, object-value category, and name representations to enable valued objects to preferentially compete for object attention during search tasks.

Fourth, all these processes, taken together, can support Waldo searches that are bottom-up or top-down (cognitive or motivational) and that use either the direct or the indirect pathway. During top-down searches, a primed object name or a distinctive motivational source in the What stream can interact with the Where stream to direct spatial attention and eye movements to the position of the object. A large number of visual search experiments and models consider top-down priming and how it may interact with parallel visual representations of target features (Wolfe et al., ).

Feature dimensions such as color, intensity, shape, size, and orientation can all contribute to such guidance.

Attention can be shifted to an object or a location through a combination of bottom-up and top-down processing. The Guided Search (Wolfe, ) and Saliency Map (Itti and Koch, ) models rely on spatial competition to select the most salient feature. In the visual search experiments that these models address, observers detected whether a single-feature target was present or not; there was no need to identify the target. These models thus do not include object-based attention or any of the other concepts and mechanisms that are needed to learn object categories and to carry out object-based searches, and they cannot explain the corresponding databases of experimental results.

The ARTSCAN Search model, in contrast, provides a detailed description of how spatial and object attention, invariant object category learning, predictive remapping, eye movement search, and conscious visual perception and recognition are intimately linked. In particular, the surface-shroud resonance, which is predicted to correspond to paying focal spatial attention to an object and to regulate invariant object learning and eye movement search, has also been predicted to be the event that triggers conscious perception of visual qualia (Foley et al., ).

Other models have focused on object recognition, rather than visual search per se. Riesenhuber and Poggio proposed a hierarchical model called HMAX to illustrate how view-invariant object recognition occurs. The view-invariant units at the higher stages are achieved by pooling together the appropriate view-tuned units for each object.

HMAX is, moreover, a purely feedforward architecture. ARTSCAN Search, in contrast, includes both bottom-up and top-down interactions, as well as recurrent interactions at multiple processing stages, to carry out its attentional, learned categorization, and search properties. In particular, HMAX includes no spatial or object attention, and no coordination of the What and Where cortical streams to learn invariant object categories and to drive object searches.

Moreover, ARTSCAN Search incorporates ART dynamics to learn view-specific object categories that can be chosen from a dense, non-stationary input environment, without a loss of learning speed or stability (Carpenter and Grossberg, , ; Carpenter et al., ). Feedforward categorization models fall apart under such learning conditions (Grossberg, ).

Kanan and Cottrell have developed a model to classify objects, faces, and flowers using natural image statistics.

Their preprocessing tries to emulate luminance adaptation within individual photoreceptors. To do this, they compute the logarithm of each pixel intensity and then normalize the result. The logarithm compresses the dynamic range of the image, but has unbounded limiting values at high and low arguments, so it cannot be the correct form factor for biological preprocessing. ARTSCAN Search does not try to model individual photoreceptors, although its front end can be augmented by detailed models of vertebrate photoreceptor adaptation. These models show how an intracellular shift property and Weber law can be achieved using habituative transmitter gates that normalize photoreceptor responses and quantitatively fit photoreceptor psychophysical and neurophysiological data (Carpenter and Grossberg, ; Grossberg and Hong, ). Instead, ARTSCAN Search embodies the next stages of visual brain adaptation using a shunting on-center off-surround network that computes a regional contrast normalization which also exhibits the shift and Weber law properties.
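A minimal sketch of the shunting steady state that yields these bounded, shift, and Weber-law properties (the parameter names A and B are assumptions; the model's actual normalization is given by Equation A6):

```python
def shunting_normalization(center, surround, A=1.0, B=1.0):
    """Steady-state response of a shunting on-center off-surround cell:

        x = B * center / (A + center + surround)

    The response is bounded by B (unlike a logarithm), and when the total
    input is large relative to A it depends on the ratio of center input to
    total input, which gives Weber-law contrast normalization.
    """
    return B * center / (A + center + surround)
```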

See Equation A6. Kanan and Cottrell then use principal component analysis (PCA) to learn filters that play the role of simple cells. They discard the largest principal component, and then select d of the remaining components by optimizing performance on an external dataset. These useful, but computationally non-local, computer vision operations do not seem to have biological homologs. ARTSCAN Search does not learn its simple and complex cell filters [see Equations 9-17], but these filters are similar to the oriented filters that self-organize in response to image statistics in biological self-organizing map models of cortical development.

Kanan and Cottrell compute a saliency map from their filters using a number of other non-local operations, and their fixations are chosen randomly. In contrast, in ARTSCAN Search, the salient features that are computed from the surface contours of the attended surface generate predictive eye movement commands to fixate the positions of these salient features, until the surface-shroud resonance collapses and enables another surface to be attended and searched [see Equations 43-49].

Random fixations do not allow the autonomous learning of invariant object categories, and do not occur in vivo (Theeuwes et al., ). Kanan and Cottrell apply PCA to the collected feature vectors, and the components with the largest eigenvalues are selected and normalized. This information is combined by assuming that fixations are statistically independent. After T fixations, the class with the greatest posterior is assigned.
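A short sketch of this independent-fixation decision rule (variable names are assumptions; in the Kanan and Cottrell model, the per-fixation likelihoods come from their learned filter responses):

```python
import numpy as np

def classify_after_fixations(log_likelihoods, prior):
    """Combine evidence from T fixations under the assumption that fixations
    are statistically independent, then choose the class with the greatest
    posterior.

    log_likelihoods: array of shape (T, n_classes), log p(fixation_t | class)
    prior:           array of shape (n_classes,), prior class probabilities
    """
    log_posterior = np.log(prior) + log_likelihoods.sum(axis=0)  # independence: sum of logs
    return int(np.argmax(log_posterior))
```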

Grossberg et al. have modeled how learned scene context can prime expected objects: for example, after seeing a stove and a sink, one expects to see a refrigerator more than a beach.

Due to the coarse resolution of peripheral vision, high-acuity object recognition requires a combination of selective attention and successive eye movements that bring the objects of interest into foveal vision (Liversedge and Findlay, ). In contrast, Thorpe et al. asked human subjects to respond whether a natural image contains an animal. The results showed that, even in the absence of foveating eye movements, visual information originating in the retinal periphery can be processed to make superordinate categorizations, such as deciding whether or not an animal is contained in the scene.

However, the subjects failed to identify the animals that they detected in the image. Identifying a tiger as a tiger, rather than just as an animal, requires a more detailed analysis by foveally-mediated perceptual and categorization processes. If several objects in a scene are featurally similar, they can be associated with multiple similar objects in foveal view, and thus do not unambiguously predict a definite object category.

The current model does not simulate the cortical magnification factor, for simplicity, since its focus is on higher-level processes. View-invariant category learning has, however, been demonstrated using log-polar preprocessing to represent the cortical magnification factor and Fuzzy ARTMAP as the view-specific category classifier (Bradski and Grossberg, ; Fazl et al., ). These results show that the cortical magnification factor can be successfully incorporated in a future version of the model.

Top-down processes occur in both cortical streams.

For the Where cortical stream, it has been suggested that top-down attention can guide target selection by facilitating information processing of stimuli at an attended location (Wolfe, ; Hyle et al., ). Such top-down modulation can enhance the effective contrast of an attended stimulus (Carrasco et al., ). Fazl et al. showed how such attentional enhancement, in the form of a surface-shroud resonance, can also regulate the learning of view-invariant object categories.

The ARTSCAN Search model extends this insight to the learning of view- and positionally-invariant object categories and the capacity to carry out bottom-up and top-down searches. For the What cortical stream, Bar proposed that low spatial frequencies in the image rapidly project to PFC through magnocellular pathways. PFC can then project back to inferotemporal cortex and to amygdala through orbitofrontal cortex.

In particular, activity in the orbitofrontal cortex is involved in producing expectations that facilitate object recognition (Bechara et al., ). A third and related mechanism drives a top-down primed search process using knowledge about the learned objects. Bar also emphasized a top-down mechanism for facilitation of object recognition from the prefrontal region to the IT area via expectancies from the orbitofrontal cortex.

The present model carries out all of its computations in Cartesian coordinates. Future versions of the model that wish to include the compression and other representational properties of space-variant processing can preprocess the input images using the cortical magnification factor (Schwartz, ; Seibert and Waxman, ; Basu and Licardie, ; Bradski and Grossberg, ), using the foundation that is summarized in section 9.

The present model simulates 2D images composed of non-overlapping natural objects. Future model extensions need to incorporate mechanisms for processing 2D images and 3D scenes with overlapping objects, to show how partially occluded objects can be separated from their occluders and completed in a way that facilitates their recognition.

These mechanisms can extend the current model to carry out searches of scenes with partially occluded objects. Spatial attention may be distributed between several objects at a time, and a scene does not go dark around a focally attended object (Eriksen and Yeh, ; Downing, ; Pylyshyn and Storm, ; Yantis, ; McMains and Somers, ). Foley et al. extended the ARTSCAN model so that spatial attention can be divided among several objects at once.


This extension enables many more data to be simulated, including data about two-object cueing, useful-field-of-view, and crowding.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The model is a network of point neurons with a single-compartment membrane voltage, V(t), that obeys a shunting membrane equation (Grossberg, , a, b). The three E terms represent reversal potentials. At equilibrium, the equation can be written as Equation A2. Thus, increases in the excitatory and inhibitory conductances lead to depolarization and hyperpolarization of the membrane potential, respectively.
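The appendix equations themselves did not survive extraction. A hedged reconstruction of the presumed forms of Equations A1 and A2, based only on the surrounding description and with parameter names assumed, is:

```latex
% Presumed form of Equation (A1): shunting membrane equation with leak,
% excitatory, and inhibitory conductances and their reversal potentials.
C_m \frac{dV(t)}{dt} =
    -\bigl(V(t)-E_{\mathrm{leak}}\bigr)\,g_{\mathrm{leak}}
    -\bigl(V(t)-E_{\mathrm{excit}}\bigr)\,g_{\mathrm{excit}}
    -\bigl(V(t)-E_{\mathrm{inhib}}\bigr)\,g_{\mathrm{inhib}}

% Presumed form of Equation (A2): equilibrium value of V, in which all
% conductances appear in the denominator (divisive normalization).
V = \frac{E_{\mathrm{leak}}\,g_{\mathrm{leak}}
        + E_{\mathrm{excit}}\,g_{\mathrm{excit}}
        + E_{\mathrm{inhib}}\,g_{\mathrm{inhib}}}
        {g_{\mathrm{leak}} + g_{\mathrm{excit}} + g_{\mathrm{inhib}}}
```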

All conductances contribute to divisive normalization of the membrane potential, as shown by the denominator in Equation A2. Equation A2 can be re-written using the weight notation of the model equations, in which superscript letters signify the pre-synaptic and post-synaptic cell populations, respectively. For example, the weight from the neuron with activity X_i to the neuron with activity Y_j is denoted by W^{XY}_{ij}. Model parameters were chosen to illustrate how attentional shrouds may be sequentially activated when a simulated scene contains multiple objects; in particular, how the object surface with the highest contrast activities can competitively form the winning shroud while inhibiting other possible shrouds in the spatial attention map.

The variables in the mathematical equations that represent the model brain regions are summarized in the accompanying table. In the superscript notation for the weights W, the first letter represents the presynaptic population and the second letter the postsynaptic population.

Due to the model's focus on the high-level interactions of the cortical What and Where streams, we simplify its front-end image processing.

The ON-cells (on-center off-surround) have small excitatory center and broader inhibitory surround receptive fields, whereas the receptive fields of the OFF-cells (off-center on-surround) have the converse relation to the ON-cells. Multiple scales (small, medium, and large) input to the boundary and surface representations that are used to drive spatial attention, category learning, and search. I_{pq} is the image input at position (p, q), and D^{cg}_{pqij} and D^{sg}_{pqij} are, respectively, the Gaussian on-center and off-surround receptive fields. The oriented simple cells in primary visual cortical area V1 receive bottom-up LGN ON and OFF cell activities, which are sampled as oriented differences at each image location.
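An illustrative difference-of-Gaussians sketch of such ON and OFF cells (the sigmas, and the use of SciPy's Gaussian filter in place of the model's exact kernels D^{cg}_{pqij} and D^{sg}_{pqij}, are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def on_off_cell_responses(image, sigma_center=1.0, sigma_surround=3.0):
    """ON (on-center off-surround) and OFF (off-center on-surround) cell
    responses approximated as half-wave rectified differences of Gaussians.
    """
    center = gaussian_filter(image, sigma_center)      # small excitatory center
    surround = gaussian_filter(image, sigma_surround)  # broader inhibitory surround
    on_cells = np.maximum(center - surround, 0.0)
    off_cells = np.maximum(surround - center, 0.0)
    return on_cells, off_cells
```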

Each simple cell, Y^g_{ijk}, is tuned to an orientation k and a scale g. The outputs from model simple cells include both ON-cells and OFF-cells, which respond to opposite contrast polarities before being half-wave rectified. The activities of polarity-insensitive complex cells, z^g_{ij}, are determined by summing the half-wave rectified outputs of the polarity-sensitive cells at the same position (i, j).
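A rough sketch of one orientation of this simple-to-complex pooling (the positional offset that defines the oriented difference is an assumption; the model uses several orientations and scales):

```python
import numpy as np

def complex_cell_response(on_cells, off_cells, offset=(0, 1)):
    """Simple cells take an oriented difference of ON and OFF activities
    across a small positional offset; the two opposite-contrast-polarity
    outputs are half-wave rectified and summed to give the polarity-
    insensitive complex cell response.
    """
    dy, dx = offset
    shifted_off = np.roll(off_cells, shift=(dy, dx), axis=(0, 1))
    simple_dark_to_light = on_cells - shifted_off   # one contrast polarity
    simple_light_to_dark = shifted_off - on_cells   # opposite polarity
    return (np.maximum(simple_dark_to_light, 0.0)
            + np.maximum(simple_light_to_dark, 0.0))
```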

The output signals of the complex cells, Z^g_{ij}, are normalized by divisive normalization (Grossberg, , b) at each position. Divisive normalization helps to suppress stimuli that are presented outside the receptive fields of neurons and sharpens the Z^g_{ij} boundaries around an object (Grossberg and Mingolla, ; Heeger, ; Schwartz and Simoncelli, ). Since the ARTSCAN Search model focuses on higher-level interactions between the What and Where cortical streams that process non-overlapping natural images with complete boundaries, several image preprocessing stages are simplified or omitted, such as interactions between cortical layers in V1 and V2 that contribute to boundary completion and figure-ground separation in response to 2D images and 3D scenes.

The object boundary activities B^g_{ij} are computed using small, medium, and large receptive fields, or scales, g, that receive multiple-scale bottom-up inputs from the complex cells Z^g_{ij}. The boundaries can also be enhanced by learned top-down inputs from a primed view-specific category; this enhancement helps to drive indirect searches for a Waldo object that codes this category. In all, the object boundary activities B^g_{ij} at position (i, j) and scale g have the equilibrium value given by Equation A19, in which the signal function m is defined by a sigmoid function.

Boundary position q is defined by a small region of the input scene in which an exemplar of an object can occur. The large-scale boundary (Equation A19) in each region can drive view-specific category learning of the object [see Equations A55-A60] and, as shown in Equation A19, can receive learned top-down modulatory inputs from the corresponding learned view-specific category neurons. The spread of LGN-activated surface activities is gated, or inhibited, by boundary signals. The LGN inputs are also modulated by top-down attentional inputs from whatever surface-shroud resonances are active.

These attentional inputs increase the contrasts of the filled-in surface activities, and thus the surface contours of the attended surface, leading to preferential choice of eye movements on that surface. The attentional inputs are mediated by gain fields that convert the head-centered shroud back to retinotopic coordinates.

The surface neurons also receive inhibitory inputs from reset neurons in the Where stream that facilitate instatement of the next surface to be attended after a spatial attentional shift. The boundary-gated diffusion coefficient, P_{pqij}, regulates the magnitude of activity spread between position (i, j) and position (p, q); it becomes small wherever the intervening boundary signals are strong. After the ON and OFF filling-in processes occur, the outputs from different scales are pooled to form a multiple-scale output signal (Hong and Grossberg, ).
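A hedged sketch of one explicit Euler step of boundary-gated diffusion (the parameter names and values, and the exact permeability form, are assumptions; the model's gating is specified by P_{pqij}):

```python
import numpy as np

def filling_in_step(surface, lgn_input, boundary, dt=0.1, delta=10.0, epsilon=1.0):
    """Activity diffuses between nearest neighbors with a permeability that
    shrinks wherever boundary signals are strong, so filled-in surface
    activity stays contained within object boundaries.
    """
    total_flow = np.zeros_like(surface)
    for shift in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
        neighbor = np.roll(surface, shift, axis=(0, 1))
        neighbor_boundary = np.roll(boundary, shift, axis=(0, 1))
        # Permeability drops as the boundary between the two cells grows.
        perm = delta / (1.0 + epsilon * (boundary + neighbor_boundary))
        total_flow += perm * (neighbor - surface)
    return surface + dt * (-surface + total_flow + lgn_input)
```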

This weighted distribution of scales, with the largest weight given to the large scale to produce a more homogeneous surface representation, is used in the competition for spatial attention to choose a winning shroud. The filled-in ON and OFF surface activities across multiple scales of the attended object surface are averaged before being contrast-enhanced by on-center and off-center networks, half-wave rectified, and added to generate surface contour output signals C_{ij} at position (i, j). Surface contours strengthen the boundaries that formed them and inhibit spurious boundaries, as in Equations A18, A . When a surface-shroud resonance is active, it enhances the activation of the attended surface via gain field neurons, as in Equations A22, A . In addition to selecting and strengthening the boundaries that formed them, surface contours are also processed in a parallel pathway that controls the target positions of eye movements that scan the attended object, as in Equation A . The role of surface contours in target selection is possible because surface contours occur at positions where surface brightnesses and colors change quickly across space, and thus mark positions where salient features exist on the surface.

When surface contour signals are strengthened by spatial attention, they can compete more effectively in the eye movement map (Equation A43) to determine the positions to which the eyes will move, thereby restricting scanning eye movements to the attended surface while its shroud is active. These averages give greater weight to the small scale because it computes better localized signals around the salient features of the filled-in surface. Model processes prior to the spatial attentional map are all in retinotopic coordinates, so that object positions change with every eye movement.

In contrast, the spatial attention map is in head-centered coordinates that are invariant to changes in eye position. Gain fields mediate this transformation (Andersen and Mountcastle, ; Andersen et al., ). However, the implementation of the gain field transformation in the ARTSCAN model increases the computational load when the input image becomes large. To overcome this problem, ARTSCAN Search modifies the gain field model of Cassanello and Ferrera, which computes the visual remapping using a product of maps instead of a linear combination.

The bottom-up channel receives bottom-up retinotopic surface inputs, which are shifted according to the eye position to form the head-centric map, whereas the top-down channel transforms the top-down head-centric map to a retinotopic map, again modulated by eye position. When both the retinal and eye position maps are two-dimensional, the gain field is four-dimensional. In the bottom-up channel, the activity I^U_{mnkl} of the gain field cell at position (m, n, k, l) is the product of the eye position map with the sum of the object surface map and the spatial attentional map.

The output signals A^I_{mn} from the gain field to the spatial attentional map are defined as the sum of all the gain field maps corresponding to different eye positions.
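A compact sketch of this product-of-maps computation for the bottom-up channel (variable names and the shift convention are assumptions):

```python
import numpy as np

def bottom_up_gain_field(surface_map, attention_map, eye_positions):
    """Each gain field map is the product of one eye-position cell's activity
    with the sum of the retinotopic surface and spatial attention maps,
    shifted by that eye position; the head-centered output to the spatial
    attention map is the sum over all eye positions.

    eye_positions: iterable of (activity, (dy, dx)) pairs giving each
                   eye-position cell's activity and its retinal shift.
    """
    combined = surface_map + attention_map
    output = np.zeros_like(combined)
    for activity, (dy, dx) in eye_positions:
        gain_field_map = activity * np.roll(combined, shift=(dy, dx), axis=(0, 1))
        output += gain_field_map
    return output
```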

In the top-down channel, as in the bottom-up channel, the activity I^D_{mnkl} of the gain field cell at position (m, n, k, l) is the product of the eye position map at position (k, l) with the sum of the shifted spatial attention map and the eye position map.