Open Access Publications
From research on the visual systems of turtles, to the perception of faces with or without makeup, to transaccadic perception and perceptual cycles in the brain, VPixx hardware and software solutions have supported research in vision science and beyond for over 20 years. We are immensely proud of the discoveries and accomplishments of our customers across the world.
On this page you will find a non-exhaustive list of peer-reviewed, open access publications citing VPixx tools dating back to 2003. Browse the list or use the tag filter to search for specific products. Note that we report the device used in the paper according to the authors; this may not accurately reflect the specific model of device used (e.g., VIEWPixx vs. VIEWPixx /3D). Nor do we guarantee the accuracy of published content. Please contact our team at [email protected] if you have any questions about a specific paper.
Curious about a specific application of our tools? Can’t find what you are looking for? Our staff scientists are happy to discuss paradigms and protocols using our equipment by email or video chat. Please contact us with your questions.
Want to have your work added to our library? Send us a message at [email protected] and we will add it. Your article must be peer-reviewed, open access, and it must indicate VPixx products were used in the research.
Use the search tool below to search for specific terms among the titles, authors and abstracts in our library.
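Our library entries are stored as standard BibTeX, with product tags carried in each entry's `keywords` field (e.g., `keywords = {3DPixx, VIEWPixx3D}`). As a rough sketch of how the tag filter works, a short script like the following (hypothetical, not part of the VPixx site) could replicate the filter offline on a downloaded BibTeX export:

```python
import re

def filter_by_tag(bibtex_text, tag):
    """Return the citation keys of entries whose 'keywords' field
    contains the given product tag (case-insensitive)."""
    keys = []
    # Split the export before each entry header like "@article{key,".
    # This is a simplification; a full BibTeX parser would also handle
    # nested braces and @string macros.
    entries = re.split(r'(?=@\w+\{)', bibtex_text)
    for entry in entries:
        m = re.match(r'@\w+\{([^,]+),', entry)
        if not m:
            continue  # skip any leading non-entry text
        kw = re.search(r'keywords\s*=\s*\{([^}]*)\}', entry)
        if kw and tag.lower() in kw.group(1).lower():
            keys.append(m.group(1))
    return keys
```

For example, `filter_by_tag(export, "VIEWPixx3D")` would return the keys of every entry tagged with that device.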
Abuleil, Dania; Begum, Viquar Unnisa; Wei, Steven; Abuleil, Ammar; Thompson, Benjamin; McCulloch, Daphne L
Noise coherence thresholds for stereopsis Journal Article
In: Investigative Ophthalmology & Visual Science, vol. 57, no. 12, pp. 1500, 2016, ISSN: 1552-5783.
Abstract | BibTeX | Tags: 3DPixx, VIEWPixx3D
@article{abuleil_noise_2016,
title = {Noise coherence thresholds for stereopsis},
author = {Dania Abuleil and Viquar Unnisa Begum and Steven Wei and Ammar Abuleil and Benjamin Thompson and Daphne L McCulloch},
issn = {1552-5783},
year = {2016},
date = {2016-09-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {57},
number = {12},
pages = {1500},
abstract = {To evaluate typical adult coherence thresholds for global stereopsis using random dot stereograms of corrugated surfaces containing noise. Stimuli were random dot stereograms that carried disparity-defined sinusoidal gratings (0.77 cycles/degree) presented at each of 5 disparity levels (1.2, 2.5, 3.7, 5.0 or 7.4 minutes of arc). Noise dots were positioned with random disparities within the depth defined by the corrugated surface of the stereograms to vary the coherence of the stimulus. A VPixx® 3D stimulus system equipped with Nvidia 3D goggles was used for dichoptic presentation. Coherence thresholds for coarse grating orientation discrimination (vertical vs. horizontal vs. oblique) were obtained using a psychometric staircase procedure. Ten adults with normal binocular vision participated. For the 3.7 disparity stimulus, thresholds were re-tested with neutral density filters (0.6, 1.2 and 1.8) in front of the non-dominant eye to simulate abnormal binocular function. The mean coherence required for disparity defined orientation discrimination was 43.0% (±11.6%). Coherence thresholds varied with stimulus disparity (ANOVA, p<0.001). Specifically, thresholds were significantly elevated for the smallest and largest disparity levels (p<0.001). Disrupting binocular vision using ND filters affected coherence thresholds (ANOVA, p<0.001). Thresholds increased with increasing filter density (35±10%, 44±18% and 61±14%, respectively). The coherence threshold for global form detection in random-dot stereograms with noise dots is a measure of stereopsis that is complementary to the conventional disparity threshold measure. Coherence stereo-thresholds are systematically affected by simulated disruption of binocularity and appear to reflect the quality of binocular vision. This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.},
keywords = {3DPixx, VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
White, Alex L.; Rolfs, Martin
Oculomotor inhibition covaries with conscious detection Journal Article
In: Journal of Neurophysiology, vol. 116, no. 3, pp. 1507–1521, 2016, ISSN: 0022-3077, (Publisher: American Physiological Society).
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{white_oculomotor_2016,
title = {Oculomotor inhibition covaries with conscious detection},
author = {Alex L. White and Martin Rolfs},
url = {https://journals.physiology.org/doi/full/10.1152/jn.00268.2016},
doi = {10.1152/jn.00268.2016},
issn = {0022-3077},
year = {2016},
date = {2016-09-01},
urldate = {2024-01-16},
journal = {Journal of Neurophysiology},
volume = {116},
number = {3},
pages = {1507–1521},
abstract = {Saccadic eye movements occur frequently even during attempted fixation, but they halt momentarily when a new stimulus appears. Here, we demonstrate that this rapid, involuntary “oculomotor freezing” reflex is yoked to fluctuations in explicit visual perception. Human observers reported the presence or absence of a brief visual stimulus while we recorded microsaccades, small spontaneous eye movements. We found that microsaccades were reflexively inhibited if and only if the observer reported seeing the stimulus, even when none was present. By applying a novel Bayesian classification technique to patterns of microsaccades on individual trials, we were able to decode the reported state of perception more accurately than the state of the stimulus (present vs. absent). Moreover, explicit perceptual sensitivity and the oculomotor reflex were both susceptible to orientation-specific adaptation. The adaptation effects suggest that the freezing reflex is mediated by signals processed in the visual cortex before reaching oculomotor control centers rather than relying on a direct subcortical route, as some previous research has suggested. We conclude that the reflexive inhibition of microsaccades immediately and inadvertently reveals when the observer becomes aware of a change in the environment. By providing an objective measure of conscious perceptual detection that does not require explicit reports, this finding opens doors to clinical applications and further investigations of perceptual awareness.},
note = {Publisher: American Physiological Society},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Meital-Kfir, Noya; Bonneh, Yoram S.; Sagi, Dov
Asymmetric visual interactions across the boundary of awareness Journal Article
In: Journal of Vision, vol. 16, no. 10, pp. 4, 2016, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{meital-kfir_asymmetric_2016,
title = {Asymmetric visual interactions across the boundary of awareness},
author = {Noya Meital-Kfir and Yoram S. Bonneh and Dov Sagi},
url = {https://doi.org/10.1167/16.10.4},
doi = {10.1167/16.10.4},
issn = {1534-7362},
year = {2016},
date = {2016-08-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {16},
number = {10},
pages = {4},
abstract = {A salient visual object can disappear from conscious perception when surrounded by a moving texture, a phenomenon known as MIB, Motion-Induced Blindness (Bonneh, Cooperman, & Sagi, 2001). Here we tested the information available in the brain from such stimuli that do not access awareness by examining interactions across the boundary of awareness between stimuli that reach awareness and those that do not. Observers performed the MIB task in which a “Cue” was presented next to the “Target” after observers reported the perceptual disappearance of the target (Kawabe, Yamada, & Miura, 2007). Oriented Gabor patches were used as targets and cues; observers reported the target's reappearance. The results indicated an interaction between the target and the cue, depending on the orientation difference (∼30° bandwidth) and distance (∼1° range), indicating preserved properties of features in the absence of awareness. Object-based representation (binding) of unseen stimuli was tested by examining the interaction between a compound stimulus and its composing features. Here we used vertical and horizontal Gabor patches and their combinations (plaids) as targets and cues. Results indicated asymmetric relations between aware and unaware object representations; a plaid cue was not effective with a component target, but a plaid target efficiently reappeared by its component cues. This result suggests that the unseen, but not the seen plaid, is decomposed into its features. Plaid targets also reappeared with plaid cues, supporting binding without awareness. Our findings suggest preconscious representations of objects and their features, with conscious perception confined to object representations.},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Coggan, David D.; Baker, Daniel H.; Andrews, Timothy J.
The Role of Visual and Semantic Properties in the Emergence of Category-Specific Patterns of Neural Response in the Human Brain Journal Article
In: eNeuro, vol. 3, no. 4, 2016, ISSN: 2373-2822, (Publisher: Society for Neuroscience Section: New Research).
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{coggan_role_2016,
title = {The Role of Visual and Semantic Properties in the Emergence of Category-Specific Patterns of Neural Response in the Human Brain},
author = {David D. Coggan and Daniel H. Baker and Timothy J. Andrews},
url = {https://www.eneuro.org/content/3/4/ENEURO.0158-16.2016},
doi = {10.1523/ENEURO.0158-16.2016},
issn = {2373-2822},
year = {2016},
date = {2016-07-01},
urldate = {2024-01-18},
journal = {eNeuro},
volume = {3},
number = {4},
abstract = {Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed categorical patterns of response to intact images emerged ∼80–100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80–100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for the variance in the intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.},
note = {Publisher: Society for Neuroscience
Section: New Research},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Katz, Leor N.; Yates, Jacob L.; Pillow, Jonathan W.; Huk, Alexander C.
Dissociated functional significance of decision-related activity in the primate dorsal stream Journal Article
In: Nature, vol. 535, no. 7611, pp. 285–288, 2016, ISSN: 0028-0836.
Abstract | Links | BibTeX | Tags: DATAPixx
@article{katz_dissociated_2016,
title = {Dissociated functional significance of decision-related activity in the primate dorsal stream},
author = {Leor N. Katz and Jacob L. Yates and Jonathan W. Pillow and Alexander C. Huk},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4966283/},
doi = {10.1038/nature18617},
issn = {0028-0836},
year = {2016},
date = {2016-07-01},
urldate = {2024-01-02},
journal = {Nature},
volume = {535},
number = {7611},
pages = {285–288},
abstract = {During decision-making, neurons in multiple brain regions exhibit responses that are correlated with decisions. However, it remains uncertain whether or not various forms of decision-related activity are causally related to decision-making. Here we address this question by recording and reversibly inactivating the lateral intraparietal (LIP) and middle temporal (MT) areas of rhesus macaques performing a motion direction discrimination task. Neurons in area LIP exhibited firing rate patterns that directly resembled the evidence accumulation process posited to govern decision making, with strong correlations between their response fluctuations and the animal's choices. Neurons in area MT, in contrast, exhibited weak correlations between their response fluctuations and animal choices, and had firing rate patterns consistent with their sensory role in motion encoding. The behavioral impact of pharmacological inactivation of each area was inversely related to their degree of decision-related activity: while inactivation of neurons in MT profoundly impaired psychophysical performance, inactivation in LIP had no measurable impact on decision-making performance, despite having silenced the very clusters that exhibited strong decision-related activity. Although LIP inactivation did not impair psychophysical behavior, it did influence spatial selection and oculomotor metrics in a free-choice control task. The absence of an effect on perceptual decision-making was stable over trials and sessions, arguing against several forms of compensation, and was robust to changes in stimulus type and task geometry. Thus, decision-related signals in LIP do not appear to be necessary for computing perceptual decisions. Our findings highlight a dissociation between decision correlation and causation, showing that strong neuron-decision correlations may reflect secondary or epiphenomenal signals, and do not necessarily offer direct access to the neural computations underlying decisions.},
keywords = {DATAPixx},
pubstate = {published},
tppubtype = {article}
}
Alais, David; Locke, Shannon M.; Leung, Johahn; Van der Burg, Erik
No attentional capture from invisible flicker Journal Article
In: Scientific Reports, vol. 6, no. 1, pp. 29296, 2016, ISSN: 2045-2322, (Number: 1 Publisher: Nature Publishing Group).
Abstract | Links | BibTeX | Tags: DATAPixx2, PROPixx
@article{alais_no_2016,
title = {No attentional capture from invisible flicker},
author = {David Alais and Shannon M. Locke and Johahn Leung and Erik Van der Burg},
url = {https://www.nature.com/articles/srep29296},
doi = {10.1038/srep29296},
issn = {2045-2322},
year = {2016},
date = {2016-07-01},
urldate = {2024-01-12},
journal = {Scientific Reports},
volume = {6},
number = {1},
pages = {29296},
abstract = {We tested whether fast flicker can capture attention using eight flicker frequencies from 20–96 Hz, including several too high to be perceived (>50 Hz). Using a 480 Hz visual display rate, we presented smoothly sampled sinusoidal temporal modulations at: 20, 30, 40, 48, 60, 69, 80 and 96 Hz. We first established flicker detection rates for each frequency. Performance was at or near ceiling until 48 Hz and dropped sharply to chance level at 60 Hz and above. We then presented the same flickering stimuli as pre-cues in a visual search task containing five elements. Flicker location varied randomly and was therefore congruent with target location on 20% of trials. Comparing congruent and incongruent trials revealed a very strong congruency effect (faster search for cued targets) for all detectable frequencies (20–48 Hz) but no effect for faster flicker rates that were detected at chance. This pattern of results (obtained with brief flicker cues: 58 ms) was replicated for long flicker cues (1000 ms) intended to allow for entrainment to the flicker frequency. These results indicate that only visible flicker serves as an exogenous attentional cue and that flicker rates too high to be perceived are completely ineffective.},
note = {Number: 1
Publisher: Nature Publishing Group},
keywords = {DATAPixx2, PROPixx},
pubstate = {published},
tppubtype = {article}
}
Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne
The absolute disparity anomaly and the mechanism of relative disparities Journal Article
In: Journal of Vision, vol. 16, no. 8, pp. 2, 2016, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{chopin_absolute_2016,
title = {The absolute disparity anomaly and the mechanism of relative disparities},
author = {Adrien Chopin and Dennis Levi and David Knill and Daphne Bavelier},
url = {https://doi.org/10.1167/16.8.2},
doi = {10.1167/16.8.2},
issn = {1534-7362},
year = {2016},
date = {2016-06-01},
urldate = {2024-01-18},
journal = {Journal of Vision},
volume = {16},
number = {8},
pages = {2},
abstract = {There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1).},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Daar, Marwan; Wilson, Hugh R.
A closer look at four-dot masking of a foveated target Journal Article
In: PeerJ, vol. 4, pp. e2068, 2016, ISSN: 2167-8359.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{daar_closer_2016,
title = {A closer look at four-dot masking of a foveated target},
author = {Marwan Daar and Hugh R. Wilson},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4893326/},
doi = {10.7717/peerj.2068},
issn = {2167-8359},
year = {2016},
date = {2016-06-01},
urldate = {2023-12-21},
journal = {PeerJ},
volume = {4},
pages = {e2068},
abstract = {Four-dot masking with a common onset mask was recently demonstrated in a fully attended and foveated target. Here, we replicate and extend this finding by directly comparing a four-dot mask with an annulus mask while probing masking as a function of mask duration, and target-mask separation. Our results suggest that while an annulus mask operates via spatially local contour interactions, a four-dot mask operates through spatially global mechanisms. We also measure how the visual system’s representation of an oriented bar is impacted by a four-dot mask, and find that masking here does not degrade the precision of perceived targets, but instead appears to be driven exclusively by rendering the target completely invisible.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Mueller, Alexandra S.; González, Esther G.; McNorgan, Chris; Steinbach, Martin J.; Timney, Brian
Effects of Vertical Direction and Aperture Size on the Perception of Visual Acceleration Journal Article
In: Perception, vol. 45, no. 6, pp. 670–683, 2016, ISSN: 0301-0066, (Publisher: SAGE Publications Ltd STM).
Abstract | Links | BibTeX | Tags: DATAPixx, VPixxProgram
@article{mueller_effects_2016,
title = {Effects of Vertical Direction and Aperture Size on the Perception of Visual Acceleration},
author = {Alexandra S. Mueller and Esther G. González and Chris McNorgan and Martin J. Steinbach and Brian Timney},
url = {https://doi.org/10.1177/0301006616629034},
doi = {10.1177/0301006616629034},
issn = {0301-0066},
year = {2016},
date = {2016-06-01},
urldate = {2024-01-03},
journal = {Perception},
volume = {45},
number = {6},
pages = {670–683},
abstract = {It is not well understood whether the distance over which moving stimuli are visible affects our sensitivity to the presence of acceleration or our ability to track such stimuli. It is also uncertain whether our experience with gravity creates anisotropies in how we detect vertical acceleration and deceleration. To address these questions, we varied the vertical extent of the aperture through which we presented vertically accelerating and decelerating random dot arrays. We hypothesized that observers would better detect and pursue accelerating and decelerating stimuli that extend over larger than smaller distances. In Experiment 1, we tested the effects of vertical direction and aperture size on acceleration and deceleration detection accuracy. Results indicated that detection is better for downward motion and for large apertures, but there is no difference between vertical acceleration and deceleration detection. A control experiment revealed that our manipulation of vertical aperture size affects the ability to track vertical motion. Smooth pursuit is better (i.e., with higher peak velocities) for large apertures than for small apertures. Our findings suggest that the ability to detect vertical acceleration and deceleration varies as a function of the direction and vertical extent over which an observer can track the moving stimulus.},
note = {Publisher: SAGE Publications Ltd STM},
keywords = {DATAPixx, VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Riddell, Nina; Hugrass, Laila; Jayasuriya, Jude; Crewther, Sheila G.; Crewther, David P.
An asymmetric outer retinal response to drifting sawtooth gratings Journal Article
In: Journal of Neurophysiology, vol. 115, no. 5, pp. 2349–2358, 2016, ISSN: 0022-3077.
Abstract | Links | BibTeX | Tags: DATAPixx, VPixxProgram
@article{riddell_asymmetric_2016,
title = {An asymmetric outer retinal response to drifting sawtooth gratings},
author = {Nina Riddell and Laila Hugrass and Jude Jayasuriya and Sheila G. Crewther and David P. Crewther},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5394652/},
doi = {10.1152/jn.00040.2016},
issn = {0022-3077},
year = {2016},
date = {2016-05-01},
urldate = {2023-12-21},
journal = {Journal of Neurophysiology},
volume = {115},
number = {5},
pages = {2349–2358},
abstract = {Electroretinogram (ERG) studies have demonstrated that the retinal response to temporally modulated fast-ON and fast-OFF sawtooth flicker is asymmetric. The response to spatiotemporal sawtooth stimuli has not yet been investigated. Perceptually, such drifting gratings or diamond plaids shaded in a sawtooth pattern appear brighter when movement produces fast-OFF relative to fast-ON luminance profiles. The neural origins of this illusion remain unclear (although a retinal basis has been suggested). Thus we presented toad eyecups with sequential epochs of sawtooth, sine-wave, and square-wave gratings drifting horizontally across the retina at temporal frequencies of 2.5–20 Hz. All ERGs revealed a sustained direct-current (DC) transtissue potential during drift and a peak at drift offset. The amplitudes of both phenomena increased with temporal frequency. Consistent with the human perceptual experience of sawtooth gratings, the sustained DC potential effect was greater for fast-OFF cf. fast-ON sawtooth. Modeling suggested that the dependence of temporal luminance contrast on stimulus device frame rate contributed to the temporal frequency effects but could not explain the divergence in response amplitudes for the two sawtooth profiles. The difference between fast-ON and fast-OFF sawtooth profiles also remained following pharmacological suppression of postreceptoral activity with tetrodotoxin (TTX), 2-amino-4-phosphonobutric acid (APB), and 2,3 cis-piperidine dicarboxylic acid (PDA). Our results indicate that the DC potential difference originates from asymmetries in the photoreceptoral response to fast-ON and fast-OFF sawtooth profiles, thus pointing to an outer retinal origin for the motion-induced drifting sawtooth brightness illusion.},
keywords = {DATAPixx, VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Orchard-Mills, Emily; Van der Burg, Erik; Alais, David
Crossmodal Correspondence Between Auditory Pitch and Visual Elevation Affects Temporal Ventriloquism Journal Article
In: Perception, vol. 45, no. 4, pp. 409–424, 2016, ISSN: 0301-0066, (Publisher: SAGE Publications Ltd STM).
Abstract | Links | BibTeX | Tags: DATAPixx
@article{orchard-mills_crossmodal_2016,
title = {Crossmodal Correspondence Between Auditory Pitch and Visual Elevation Affects Temporal Ventriloquism},
author = {Emily Orchard-Mills and Erik Van der Burg and David Alais},
url = {https://doi.org/10.1177/0301006615622320},
doi = {10.1177/0301006615622320},
issn = {0301-0066},
year = {2016},
date = {2016-04-01},
urldate = {2024-01-02},
journal = {Perception},
volume = {45},
number = {4},
pages = {409–424},
abstract = {Temporal ventriloquism is the shift in perceived timing of a visual stimulus that occurs when an auditory stimulus is presented close in time. This study investigated whether crossmodal correspondence between auditory pitch and visual elevation modulates temporal ventriloquism. Participants were presented two visual stimuli (above and below fixation) across a range of stimulus onset asynchronies and were asked to judge the order of the events. A task-irrelevant auditory click was presented shortly before the first and another shortly after the second visual stimulus. There were two pitches used (low and high) and the congruency between the auditory and visual stimuli was manipulated. The results show that incongruent pairings between pitch and elevation abolish temporal ventriloquism. In contrast, the crossmodal correspondence effect was absent when the direction of the pitch change was fixed within sessions, reducing the saliency of the pitch change. The results support previous studies suggesting that in addition to spatial and temporal factors, crossmodal correspondences can influence binding of information across the senses, although these effects are likely to be dependent on the saliency of the crossmodal mapping.},
note = {Publisher: SAGE Publications Ltd STM},
keywords = {DATAPixx},
pubstate = {published},
tppubtype = {article}
}
Zavitz, Elizabeth; Yu, Hsin-Hao; Rowe, Elise G.; Rosa, Marcello G. P.; Price, Nicholas S. C.
Rapid Adaptation Induces Persistent Biases in Population Codes for Visual Motion Journal Article
In: Journal of Neuroscience, vol. 36, no. 16, pp. 4579–4590, 2016, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Articles).
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{zavitz_rapid_2016,
title = {Rapid Adaptation Induces Persistent Biases in Population Codes for Visual Motion},
author = {Elizabeth Zavitz and Hsin-Hao Yu and Elise G. Rowe and Marcello G. P. Rosa and Nicholas S. C. Price},
url = {https://www.jneurosci.org/content/36/16/4579},
doi = {10.1523/JNEUROSCI.4563-15.2016},
issn = {0270-6474, 1529-2401},
year = {2016},
date = {2016-04-01},
urldate = {2024-01-17},
journal = {Journal of Neuroscience},
volume = {36},
number = {16},
pages = {4579–4590},
abstract = {Each visual experience changes the neural response to subsequent stimuli. If the brain is unable to incorporate these encoding changes, the decoding, or perception, of subsequent stimuli is biased. Although the phenomenon of adaptation pervades the nervous system, its effects have been studied mainly in isolation, based on neuronal encoding changes induced by an isolated, prolonged stimulus. To understand how adaptation-induced biases arise and persist under continuous, naturalistic stimulation, we simultaneously recorded the responses of up to 61 neurons in the marmoset (Callithrix jacchus) middle temporal area to a sequence of directions that changed every 500 ms. We found that direction-specific adaptation following only 0.5 s of stimulation strongly affected encoding for up to 2 s by reducing both the gain and the spike count correlations between pairs of neurons with preferred directions close to the adapting direction. In addition, smaller changes in bandwidth and preferred direction were observed in some animals. Decoding individual trials of adaptation-affected activity in simultaneously recorded neurons predicted repulsive biases that are consistent with the direction aftereffect. Surprisingly, removing spike count correlations by trial shuffling did not impact decoding performance or bias. When adaptation had the largest effect on encoding, the decoder made the most errors. This suggests that neural and perceptual repulsion is not a mechanism to enhance perceptual performance but is instead a necessary consequence of optimizing neural encoding for the identification of a wide range of stimulus properties in diverse temporal contexts.
SIGNIFICANCE STATEMENT Although perception depends upon decoding the pattern of activity across a neuronal population, the encoding properties of individual neurons are unreliable: a single neuron's response to repetitions of the same stimulus is variable, and depends on both its spatial and temporal context. In this manuscript, we describe the complete cascade of adaptation-induced effects in sensory encoding and show how they predict population decoding errors consistent with perceptual biases. We measure the time course of adaptation-induced changes to the response properties of neurons in isolation, and to the correlation structure across pairs of simultaneously recorded neurons. These results provide novel insight into how and for how long adaptation affects the neural code, particularly during continuous, naturalistic vision.},
note = {Publisher: Society for Neuroscience
Section: Articles},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Wallis, Thomas S. A.; Bethge, Matthias; Wichmann, Felix A.
Testing models of peripheral encoding using metamerism in an oddity paradigm Journal Article
In: Journal of Vision, vol. 16, no. 2, pp. 4, 2016, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{wallis_testing_2016,
title = {Testing models of peripheral encoding using metamerism in an oddity paradigm},
author = {Thomas S. A. Wallis and Matthias Bethge and Felix A. Wichmann},
url = {https://doi.org/10.1167/16.2.4},
doi = {10.1167/16.2.4},
issn = {1534-7362},
year = {2016},
date = {2016-03-01},
urldate = {2024-01-18},
journal = {Journal of Vision},
volume = {16},
number = {2},
pages = {4},
abstract = {Most of the visual field is peripheral, and the periphery encodes visual input with less fidelity compared to the fovea. What information is encoded, and what is lost in the visual periphery? A systematic way to answer this question is to determine how sensitive the visual system is to different kinds of lossy image changes compared to the unmodified natural scene. If modified images are indiscriminable from the original scene, then the information discarded by the modification is not important for perception under the experimental conditions used. We measured the detectability of modifications of natural image structure using a temporal three-alternative oddity task, in which observers compared modified images to original natural scenes. We consider two lossy image transformations, Gaussian blur and Portilla and Simoncelli texture synthesis. Although our paradigm demonstrates metamerism (physically different images that appear the same) under some conditions, in general we find that humans can be capable of impressive sensitivity to deviations from natural appearance. The representations we examine here do not preserve all the information necessary to match the appearance of natural scenes in the periphery.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Geringswald, Franziska; Porracin, Eleonora; Pollmann, Stefan
Impairment of visual memory for objects in natural scenes by simulated central scotomata Journal Article
In: Journal of Vision, vol. 16, no. 2, pp. 6, 2016, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: RESPONSEPixx
@article{geringswald_impairment_2016,
title = {Impairment of visual memory for objects in natural scenes by simulated central scotomata},
author = {Franziska Geringswald and Eleonora Porracin and Stefan Pollmann},
url = {https://doi.org/10.1167/16.2.6},
doi = {10.1167/16.2.6},
issn = {1534-7362},
year = {2016},
date = {2016-03-01},
urldate = {2024-01-08},
journal = {Journal of Vision},
volume = {16},
number = {2},
pages = {6},
abstract = {Because of the close link between foveal vision and the spatial deployment of attention, typically only objects that have been foveated during scene exploration may form detailed and persistent memory representations. In a recent study on patients suffering from age-related macular degeneration, however, we found surprisingly accurate visual long-term memory for objects in scenes. Normal exploration patterns suggested that the patients had learned to rereference saccade targets to an extrafoveal retinal location. This rereferencing may allow use of an extrafoveal location as a focus of attention for efficient object encoding into long-term memory. Here, we tested this hypothesis in normal-sighted observers with gaze-contingent central scotoma simulations. As these observers were inexperienced in scene exploration with central vision loss and had not developed saccadic rereferencing, we expected deficits in long-term memory for objects. We used the same change detection task as in our patient study, probing sensitivity to object changes after a period of free scene exploration. Change detection performance was significantly reduced for two types of scotoma simulation diminishing foveal and parafoveal vision—a visible gray disc and a more subtle image warping—compared with unimpaired controls, confirming our hypothesis. The impact of a smaller scotoma covering specifically foveal vision was less distinct, leading to a marginally significant decrease of long-term memory performance compared with controls. We conclude that attentive encoding of objects is deficient when central vision is lost as long as successful saccadic rereferencing has not yet developed.},
keywords = {RESPONSEPixx},
pubstate = {published},
tppubtype = {article}
}
Lithari, Chrysa; Sánchez-García, Carolina; Ruhnau, Philipp; Weisz, Nathan
Large-scale network-level processes during entrainment Journal Article
In: Brain Research, vol. 1635, pp. 143–152, 2016, ISSN: 0006-8993.
Abstract | Links | BibTeX | Tags: PROPixx
@article{lithari_large-scale_2016,
title = {Large-scale network-level processes during entrainment},
author = {Chrysa Lithari and Carolina Sánchez-García and Philipp Ruhnau and Nathan Weisz},
url = {https://www.sciencedirect.com/science/article/pii/S0006899316300208},
doi = {10.1016/j.brainres.2016.01.043},
issn = {0006-8993},
year = {2016},
date = {2016-03-01},
urldate = {2024-01-08},
journal = {Brain Research},
volume = {1635},
pages = {143–152},
abstract = {Visual rhythmic stimulation evokes a robust power increase exactly at the stimulation frequency, the so-called steady-state response (SSR). Localization of visual SSRs normally shows a very focal modulation of power in visual cortex and led to the treatment and interpretation of SSRs as a local phenomenon. Given the brain network dynamics, we hypothesized that SSRs have additional large-scale effects on the brain functional network that can be revealed by means of graph theory. We used rhythmic visual stimulation at a range of frequencies (4–30 Hz), recorded MEG and investigated source level connectivity across the whole brain. Using graph theoretical measures we observed a frequency-unspecific reduction of global density in the alpha band “disconnecting” visual cortex from the rest of the network. Also, a frequency-specific increase of connectivity between occipital cortex and precuneus was found at the stimulation frequency that exhibited the highest resonance (30 Hz). In conclusion, we showed that SSRs dynamically re-organized the brain functional network. These large-scale effects should be taken into account not only when attempting to explain the nature of SSRs, but also when used in various experimental designs.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Wilson, Hugh R.; Fung, Jeffrey
Effect of motion discontinuities on discrimination of periodic trajectories Journal Article
In: Journal of Vision, vol. 16, no. 3, pp. 24, 2016, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx, VPixxProgram
@article{wilson_effect_2016,
title = {Effect of motion discontinuities on discrimination of periodic trajectories},
author = {Hugh R. Wilson and Jeffrey Fung},
url = {https://doi.org/10.1167/16.3.24},
doi = {10.1167/16.3.24},
issn = {1534-7362},
year = {2016},
date = {2016-02-01},
urldate = {2024-01-02},
journal = {Journal of Vision},
volume = {16},
number = {3},
pages = {24},
abstract = {Many biologically important motions are described by periodic trajectories. Radial frequency (RF) trajectories are one example, in which the motion of a difference of Gaussians (DOG) target moves along a path described by a sinusoidal deviation of the radius from a perfect circle (Or, Thabet, Wilkinson, & Wilson, 2011). Here we explore the hypothesis that visual processing of RF trajectories involves global spatio-temporal processes that are disrupted by motion discontinuity. To test this hypothesis, RF trajectories were used that interspersed smooth, continuous motion with three or four discontinuous jumps to other portions of the trajectory. These jumps were arranged so that the entire trajectory was traversed in the same amount of time as in the continuous motion control condition. The motion discontinuities increased thresholds by a factor of approximately 2.1 relative to continuous motion. This result provides support for global spatio-temporal processing of RF motion trajectories. Comparison with previous results suggests that motion discontinuities erase memory for earlier parts of the trajectory, thereby causing thresholds to be based on only the final segment viewed. Finally, it is shown that RF trajectories obey the 1/3 power law characteristic of biological motion.},
keywords = {VIEWPixx, VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Olkkonen, Maria; Saarela, Toni P.; Allred, Sarah R.
Perception-memory interactions reveal a computational strategy for perceptual constancy Journal Article
In: Journal of Vision, vol. 16, no. 3, pp. 38, 2016, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: DATAPixx
@article{olkkonen_perception-memory_2016,
title = {Perception-memory interactions reveal a computational strategy for perceptual constancy},
author = {Maria Olkkonen and Toni P. Saarela and Sarah R. Allred},
url = {https://doi.org/10.1167/16.3.38},
doi = {10.1167/16.3.38},
issn = {1534-7362},
year = {2016},
date = {2016-02-01},
urldate = {2024-01-17},
journal = {Journal of Vision},
volume = {16},
number = {3},
pages = {38},
abstract = {A key challenge for the visual system is to extract constant object properties from incoming sensory information. This information is ambiguous because the same sensory signal can arise from many combinations of object properties and viewing conditions and noisy because of the variability in sensory encoding. The competing accounts for perceptual constancy of surface lightness fall into two classes of model: One derives lightness estimates from border contrasts, and another explicitly infers surface reflectance. To test these accounts, we combined a novel psychophysical task with probabilistic implementations of both models. Observers compared the lightness of two stimuli under a memory demand (a delay between the stimuli), a context change (different surround luminance), or both. Memory biased perceived lightness toward the mean of the whole stimulus ensemble. Context change caused the classical simultaneous lightness contrast effect, in which a target appears lighter against a dark surround and darker against a light surround. These effects were not independent: Combined memory load and context change elicited a bias smaller than predicted assuming an independent combination of biases. Both models explain the memory bias as an effect of prior expectations on perception. Both models also produce a context effect, but only the reflectance model correctly describes the magnitude. The reflectance model, finally, captures the memory-context interaction better than the contrast model, both qualitatively and quantitatively. We conclude that (a) lightness perception is more consistent with reflectance inference than contrast coding and (b) adding a memory demand to a perceptual task both renders it more ecologically valid and helps adjudicate between competing models.},
keywords = {DATAPixx},
pubstate = {published},
tppubtype = {article}
}
Allard, Rémy; Faubert, Jocelyn
The Role of Feature Tracking in the Furrow Illusion Journal Article
In: Frontiers in Human Neuroscience, vol. 10, 2016, ISSN: 1662-5161.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{allard_role_2016,
title = {The Role of Feature Tracking in the Furrow Illusion},
author = {Rémy Allard and Jocelyn Faubert},
url = {https://www.frontiersin.org/articles/10.3389/fnhum.2016.00081},
issn = {1662-5161},
year = {2016},
date = {2016-01-01},
urldate = {2024-01-18},
journal = {Frontiers in Human Neuroscience},
volume = {10},
abstract = {In the furrow illusion (Anstis, 2012), the perceived path of a moving target follows the veridical path orientation when viewed foveally, but follows the orientation of the texture when viewed peripherally. These radically different motion percepts, depending on whether the stimulus is viewed foveally or peripherally, have led Anstis to conclude that the furrow illusion reveals “profound differences in the way that the periphery and fovea process visual motion.” In the current study, we rather argue that the different percepts can be explained by reduced position acuity with eccentricity and therefore do not imply different ways of processing motion per se. If feature tracking, which is position-based, is involved in the perception of the veridical motion direction, then impairing the feature tracking motion system should strengthen the illusion. To reduce the contribution of the feature tracking motion system, we used a crowding paradigm consisting of presenting many nearby targets. We found that under crowding conditions, the furrow illusion was stronger. We conclude that feature tracking was involved in the perception of the veridical motion direction, which is compatible with the hypothesis that the different motion percepts at fixation and in the periphery are due to a reduced position acuity with eccentricity affecting feature tracking, not to different ways of processing motion per se.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Bays, Brett C.; Turk-Browne, Nicholas B.; Seitz, Aaron R.
Dissociable behavioural outcomes of visual statistical learning Journal Article
In: Visual Cognition, vol. 23, no. 9-10, pp. 1072–1097, 2016, ISSN: 1350-6285.
Abstract | Links | BibTeX | Tags: DATAPixx, RESPONSEPixx
@article{bays_dissociable_2016,
title = {Dissociable behavioural outcomes of visual statistical learning},
author = {Brett C. Bays and Nicholas B. Turk-Browne and Aaron R. Seitz},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4963038/},
doi = {10.1080/13506285.2016.1139647},
issn = {1350-6285},
year = {2016},
date = {2016-01-01},
urldate = {2024-01-02},
journal = {Visual Cognition},
volume = {23},
number = {9-10},
pages = {1072–1097},
abstract = {Statistical learning refers to the extraction of probabilistic relationships between stimuli and is increasingly used as a method to understand learning processes. However, numerous cognitive processes are sensitive to the statistical relationships between stimuli and any one measure of learning may conflate these processes; to date little research has focused on differentiating these processes. To understand how multiple processes underlie statistical learning, here we compared, within the same study, operational measures of learning from different tasks that may be differentially sensitive to these processes. In Experiment 1, participants were visually exposed to temporal regularities embedded in a stream of shapes. Their task was to periodically detect whether a shape, whose contrast was staircased to a threshold level, was present or absent. Afterwards, they completed a search task, where statistically predictable shapes were found more quickly. We used the search task to label shape pairs as “learned” or “non-learned”, and then used these labels to analyse the detection task. We found a dissociation between learning on the search task and the detection task where only non-learned pairs showed learning effects in the detection task. This finding was replicated in further experiments with recognition memory (Experiment 2) and associative learning tasks (Experiment 3). Taken together, these findings are consistent with the view that statistical learning may comprise a family of processes that can produce dissociable effects on different aspects of behaviour.},
keywords = {DATAPixx, RESPONSEPixx},
pubstate = {published},
tppubtype = {article}
}
Tagai, Keiko; Ohtaka, Hitomi; Nittono, Hiroshi
Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup Journal Article
In: Frontiers in Psychology, vol. 7, 2016, ISSN: 1664-1078.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{tagai_faces_2016,
title = {Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup},
author = {Keiko Tagai and Hitomi Ohtaka and Hiroshi Nittono},
url = {https://www.frontiersin.org/articles/10.3389/fpsyg.2016.00226},
issn = {1664-1078},
year = {2016},
date = {2016-01-01},
urldate = {2024-01-03},
journal = {Frontiers in Psychology},
volume = {7},
abstract = {Many women wear facial makeup to accentuate their appeal and attractiveness. Makeup may vary from natural (light) to glamorous (heavy), depending on the context of interpersonal situations, an emphasis on femininity, and current societal makeup trends. This study examined how light makeup and heavy makeup influenced attractiveness ratings and facial recognition. In a rating task, 38 Japanese women assigned attractiveness ratings to 36 Japanese female faces with no makeup, light makeup, and heavy makeup (12 each). In a subsequent recognition task, the participants were presented with 36 old and 36 new faces. Results indicated that attractiveness was rated highest for the light makeup faces and lowest for the no makeup faces. In contrast, recognition performance was higher for the no makeup and light makeup faces than for the heavy makeup faces. Faces with heavy makeup produced a higher rate of false recognition than did other faces, possibly because heavy makeup creates an impression of the style of makeup itself, rather than the individual wearing the makeup. The present study suggests that light makeup is preferable to heavy makeup in that light makeup does not interfere with individual recognition and gives beholders positive impressions.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}