Open Access Publications
From research on the visual systems of turtles, to the perception of faces with or without makeup, to transaccadic perception and perceptual cycles in the brain, VPixx hardware and software solutions have supported research in vision science and beyond for over 20 years. We are immensely proud of the discoveries and accomplishments of our customers across the world.
On this page you will find a non-exhaustive list of peer-reviewed, open access publications citing VPixx tools, dating back to 2003. Browse the list or use the tag filter to search for specific products. Note that we report the device used in each paper as described by the authors; this may not accurately reflect the specific model of device used (e.g., VIEWPixx vs. VIEWPixx /3D), and we do not guarantee the accuracy of the published content. Please contact our team at [email protected] if you have any questions about a specific paper.
Curious about a specific application of our tools? Can’t find what you are looking for? Our staff scientists are happy to discuss paradigms and protocols using our equipment by email or video chat. Please contact us with your questions.
Want to have your work added to our library? Send us a message at [email protected]. Your article must be peer-reviewed, open access, and must indicate that VPixx products were used in the research.
Use the search tool below to find specific terms among the titles, authors, and abstracts in our library.
Neumann, Antonia; Leube, Alexander; Nabawi, Najma; Sauer, Yannick; Essig, Peter; Breher, Katharina; Wahl, Siegfried
Short-Term Peripheral Contrast Reduction Affects Central Chromatic and Achromatic Contrast Sensitivity Journal Article
In: Photonics, vol. 9, no. 3, pp. 123, 2022, ISSN: 2304-6732, (Number: 3 Publisher: Multidisciplinary Digital Publishing Institute).
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{neumann_short-term_2022,
title = {Short-Term Peripheral Contrast Reduction Affects Central Chromatic and Achromatic Contrast Sensitivity},
author = {Antonia Neumann and Alexander Leube and Najma Nabawi and Yannick Sauer and Peter Essig and Katharina Breher and Siegfried Wahl},
url = {https://www.mdpi.com/2304-6732/9/3/123},
doi = {10.3390/photonics9030123},
issn = {2304-6732},
year = {2022},
date = {2022-03-01},
urldate = {2023-12-21},
journal = {Photonics},
volume = {9},
number = {3},
pages = {123},
abstract = {Peripheral retinal contrast reduction is suggested as a potential myopia control strategy. However, the underlying mechanism is yet unknown. Therefore, this study investigated the influence of peripheral contrast reduction on central chromatic and achromatic contrast sensitivity (CS). A total of 19 participants were included. Peripheral contrast reduction was induced via Bangerter foils of 0.4 and 0.8 density, each with a clear central zone of 8.0 mm diameter. Central achromatic and chromatic (for S-, M-, and L-cone types) CS was measured at 3 and 12 cpd in a 2-IFC psychophysical procedure. CS was tested monocularly at 0, 30, and 90 min of adaptation time, while the fellow eye was covered by an infrared filter. With the filter in place, pupil size was controlled to be smaller than the clear central aperture. Data were analyzed using linear mixed models. Cone-type CS showed significant differences among each other (all p < 0.05), except for the achromatic and L-cone type (p = 0.87). The minimum sensitivity was found with the S-cone type and the maximum with the M-cone type. Central achromatic and chromatic CS were equally affected by diffusion. The level of peripheral diffusion also influenced CS, while the 0.8 Bangerter foil led to a higher reduction in CS compared to the 0.4 Bangerter foil (p = 0.0008) and the control condition (p = 0.05). A significant reduction in CS occurred between 30 and 90 min of adaptation time (p < 0.0001). The current study found that peripheral contrast reduction impacted central achromatic and chromatic CS equally. It further showed that the amplitude of reduction was influenced by the level of diffusion, with the reduction becoming more pronounced over time.},
note = {Number: 3
Publisher: Multidisciplinary Digital Publishing Institute},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Hauswald, Anne; Keitel, Anne; Chen, Ya-Ping; Rösch, Sebastian; Weisz, Nathan
Degradation levels of continuous speech affect neural speech tracking and alpha power differently Journal Article
In: European Journal of Neuroscience, vol. 55, no. 11-12, pp. 3288–3302, 2022, ISSN: 1460-9568, (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/ejn.14912).
Abstract | Links | BibTeX | Tags: RESPONSEPixxMRI, SOUNDPixx, VPixxProgram
@article{hauswald_degradation_2022,
title = {Degradation levels of continuous speech affect neural speech tracking and alpha power differently},
author = {Anne Hauswald and Anne Keitel and Ya-Ping Chen and Sebastian Rösch and Nathan Weisz},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/ejn.14912},
doi = {10.1111/ejn.14912},
issn = {1460-9568},
year = {2022},
date = {2022-01-01},
urldate = {2023-12-21},
journal = {European Journal of Neuroscience},
volume = {55},
number = {11-12},
pages = {3288–3302},
abstract = {Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports have been mixed concerning the modulation of these measures, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech with three different levels of degradation (original, 7-channel and 3-channel vocoding). Intelligibility declined with declining clarity, but speech was still intelligible to some extent even for the lowest clarity level (3-channel vocoding). Low-frequency (1–7 Hz) speech tracking suggested a U-shaped relationship with strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel and 1-channel) in a second MEG study. Using this wider range of degradation, the speech-brain synchronization showed a similar pattern as in study 1, but further showed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels reaching a floor effect for 5-channel vocoding. Predicting subjective intelligibility based on models either combining both measures or each measure alone showed superiority of the combined model. Our findings underline that speech tracking and alpha power are modified differently by the degree of degradation of continuous speech but together contribute to the subjective speech understanding.},
note = {_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/ejn.14912},
keywords = {RESPONSEPixxMRI, SOUNDPixx, VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Petras, Kirsten; Oever, Sanne; Dalal, Sarang S.; Goffaux, Valerie
Information redundancy across spatial scales modulates early visual cortical processing Journal Article
In: NeuroImage, vol. 244, pp. 118613, 2021, ISSN: 1053-8119.
Abstract | Links | BibTeX | Tags: PROPixx
@article{petras_information_2021,
title = {Information redundancy across spatial scales modulates early visual cortical processing},
author = {Kirsten Petras and Sanne Oever and Sarang S. Dalal and Valerie Goffaux},
url = {https://www.sciencedirect.com/science/article/pii/S1053811921008867},
doi = {10.1016/j.neuroimage.2021.118613},
issn = {1053-8119},
year = {2021},
date = {2021-12-01},
urldate = {2024-01-03},
journal = {NeuroImage},
volume = {244},
pages = {118613},
abstract = {Visual images contain redundant information across spatial scales where low spatial frequency contrast is informative towards the location and likely content of high spatial frequency detail. Previous research suggests that the visual system makes use of those redundancies to facilitate efficient processing. In this framework, a fast, initial analysis of low-spatial frequency (LSF) information guides the slower and later processing of high spatial frequency (HSF) detail. Here, we used multivariate classification as well as time-frequency analysis of MEG responses to the viewing of intact and phase scrambled images of human faces to demonstrate that the availability of redundant LSF information, as found in broadband intact images, correlates with a reduction in HSF representational dominance in both early and higher-level visual areas as well as a reduction of gamma-band power in early visual cortex. Our results indicate that the cross spatial frequency information redundancy that can be found in all natural images might be a driving factor in the efficient integration of fine image details.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Westner, Britta U.; Lubell, James I.; Jensen, Mads; Hokland, Sigbjørn; Dalal, Sarang S.
Contactless measurements of retinal activity using optically pumped magnetometers Journal Article
In: NeuroImage, vol. 243, pp. 118528, 2021, ISSN: 1053-8119.
Abstract | Links | BibTeX | Tags:
@article{westner_contactless_2021,
title = {Contactless measurements of retinal activity using optically pumped magnetometers},
author = {Britta U. Westner and James I. Lubell and Mads Jensen and Sigbjørn Hokland and Sarang S. Dalal},
url = {https://www.sciencedirect.com/science/article/pii/S1053811921008016},
doi = {10.1016/j.neuroimage.2021.118528},
issn = {1053-8119},
year = {2021},
date = {2021-11-01},
urldate = {2024-01-03},
journal = {NeuroImage},
volume = {243},
pages = {118528},
abstract = {Optically pumped magnetometers (OPMs) have been adopted for the measurement of brain activity. Without the need to be cooled to cryogenic temperatures, an array of these sensors can be placed more flexibly, which allows for the recording of neuronal structures other than neocortex. Here we use eight OPM sensors to record human retinal activity following flash stimulation. We compare this magnetoretinographic (MRG) activity to the simultaneously recorded electroretinogram of the eight participants. The MRG shows the familiar flash-evoked potentials (a-wave and b-wave) and shares a highly significant amount of information with the electroretinogram (both in a simultaneous and separate measurement). We conclude that OPM sensors have the potential to become a contactless alternative to fiber electrodes for the measurement of retinal activity. Such a contactless solution can benefit both clinical and neuroscientific settings.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Pan, Yali; Frisson, Steven; Jensen, Ole
Neural evidence for lexical parafoveal processing Journal Article
In: Nature Communications, vol. 12, no. 1, pp. 5234, 2021, ISSN: 2041-1723, (Number: 1 Publisher: Nature Publishing Group).
Abstract | Links | BibTeX | Tags: PROPixx
@article{pan_neural_2021,
title = {Neural evidence for lexical parafoveal processing},
author = {Yali Pan and Steven Frisson and Ole Jensen},
url = {https://www.nature.com/articles/s41467-021-25571-x},
doi = {10.1038/s41467-021-25571-x},
issn = {2041-1723},
year = {2021},
date = {2021-09-01},
urldate = {2024-01-31},
journal = {Nature Communications},
volume = {12},
number = {1},
pages = {5234},
abstract = {In spite of the reduced visual acuity, parafoveal information plays an important role in natural reading. However, competing models on reading disagree on whether words are previewed parafoveally at the lexical level. We find neural evidence for lexical parafoveal processing by combining a rapid invisible frequency tagging (RIFT) approach with magnetoencephalography (MEG) and eye-tracking. In a silent reading task, target words are tagged (flickered) subliminally at 60 Hz. The tagging responses measured when fixating on the pre-target word reflect parafoveal processing of the target word. We observe stronger tagging responses during pre-target fixations when followed by low compared with high lexical frequency targets. Moreover, this lexical parafoveal processing is associated with individual reading speed. Our findings suggest that reading unfolds in the fovea and parafovea simultaneously to support fluent reading.},
note = {Number: 1
Publisher: Nature Publishing Group},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Duecker, Katharina; Gutteling, Tjerk P.; Herrmann, Christoph S.; Jensen, Ole
No Evidence for Entrainment: Endogenous Gamma Oscillations and Rhythmic Flicker Responses Coexist in Visual Cortex Journal Article
In: The Journal of Neuroscience, vol. 41, no. 31, pp. 6684–6698, 2021, ISSN: 0270-6474, 1529-2401.
Links | BibTeX | Tags: PROPixx
@article{duecker_no_2021,
title = {No Evidence for Entrainment: Endogenous Gamma Oscillations and Rhythmic Flicker Responses Coexist in Visual Cortex},
author = {Katharina Duecker and Tjerk P. Gutteling and Christoph S. Herrmann and Ole Jensen},
url = {https://www.jneurosci.org/lookup/doi/10.1523/JNEUROSCI.3134-20.2021},
doi = {10.1523/JNEUROSCI.3134-20.2021},
issn = {0270-6474, 1529-2401},
year = {2021},
date = {2021-08-01},
urldate = {2024-01-31},
journal = {The Journal of Neuroscience},
volume = {41},
number = {31},
pages = {6684–6698},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Schweitzer, Richard; Rolfs, Martin
Intrasaccadic motion streaks jump-start gaze correction Journal Article
In: Science Advances, vol. 7, no. 30, pp. eabf2218, 2021, ISSN: 2375-2548.
Abstract | Links | BibTeX | Tags: DATAPixx3, PROPixx, TRACKPixx3
@article{schweitzer_intrasaccadic_2021,
title = {Intrasaccadic motion streaks jump-start gaze correction},
author = {Richard Schweitzer and Martin Rolfs},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8302125/},
doi = {10.1126/sciadv.abf2218},
issn = {2375-2548},
year = {2021},
date = {2021-07-01},
urldate = {2024-01-08},
journal = {Science Advances},
volume = {7},
number = {30},
pages = {eabf2218},
abstract = {Rapid eye movements entail retinal motion streaks that provide spatiotemporal continuity of objects and guide gaze correction. Rapid eye movements (saccades) incessantly shift objects across the retina. To establish object correspondence, the visual system is thought to match surface features of objects across saccades. Here, we show that an object’s intrasaccadic retinal trace—a signal previously considered unavailable to visual processing—facilitates this match making. Human observers made saccades to a cued target in a circular stimulus array. Using high-speed visual projection, we swiftly rotated this array during the eyes’ flight, displaying continuous intrasaccadic target motion. Observers’ saccades landed between the target and a distractor, prompting secondary saccades. Independently of the availability of object features, which we controlled tightly, target motion increased the rate and reduced the latency of gaze-correcting saccades to the initial presaccadic target, in particular when the target’s stimulus features incidentally gave rise to efficient motion streaks. These results suggest that intrasaccadic visual information informs the establishment of object correspondence and jump-starts gaze correction.},
keywords = {DATAPixx3, PROPixx, TRACKPixx3},
pubstate = {published},
tppubtype = {article}
}
Neumann, Antonia; Breher, Katharina; Wahl, Siegfried
Effect of short-wavelength light emitting screen technologies on human contrast sensitivity Journal Article
In: Investigative Ophthalmology & Visual Science, vol. 62, no. 8, pp. 1375, 2021, ISSN: 1552-5783.
Abstract | BibTeX | Tags: VIEWPixx3D
@article{neumann_effect_2021,
title = {Effect of short-wavelength light emitting screen technologies on human contrast sensitivity},
author = {Antonia Neumann and Katharina Breher and Siegfried Wahl},
issn = {1552-5783},
year = {2021},
date = {2021-06-01},
journal = {Investigative Ophthalmology & Visual Science},
volume = {62},
number = {8},
pages = {1375},
abstract = {Blue light (short-wavelength light) is a part of the visible electromagnetic light spectrum for the human eye. Short wavelengths have a higher energetic radiation compared to middle and long wavelengths, which might have multiple effects on retinal processes. This study was conducted to investigate whether commonly available screen technologies are able to elicit changes in contrast sensitivity (CS) and therefore may be used to control myopia eventually. In total, 30 right eyes were randomly stimulated with light of different wavelengths: ∼480 nm, ∼530 nm, ∼630 nm and polychromatic light ∼380-780 nm, for 3 min each presented on a liquid crystal display, the ViewPixx/3D (VPixx Technologies, Saint-Bruno, Canada). Light stimulation was performed full field (FF) and only on the optic nerve head (ONH). CS was measured before any stimulation as reference and after each stimulation condition using a new and time-efficient CS test with Gabor patches and the method of adjustment for 3, 6, 12, 18 and 24 cycles per degree (cpd). Changes in CS after stimulation conditions were analyzed by three-way repeated measures analysis of variance. In a priorly conducted complementary study on a subset of five participants, the new CS test was verified to a validated CS test regarding agreement, repeatability (COR) and time. The new CS test was used with 3 trials per spatial frequency (interclass correlation coefficient = 0.94},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Suess, Nina; Hartmann, Thomas; Weisz, Nathan
Differential attention-dependent adjustment of frequency, power and phase in primary sensory and frontoparietal areas Journal Article
In: Cortex, vol. 137, pp. 179–193, 2021, ISSN: 0010-9452.
Abstract | Links | BibTeX | Tags: PROPixx, RESPONSEPixxMRI, SOUNDPixx
@article{suess_differential_2021,
title = {Differential attention-dependent adjustment of frequency, power and phase in primary sensory and frontoparietal areas},
author = {Nina Suess and Thomas Hartmann and Nathan Weisz},
url = {https://www.sciencedirect.com/science/article/pii/S0010945221000320},
doi = {10.1016/j.cortex.2021.01.008},
issn = {0010-9452},
year = {2021},
date = {2021-04-01},
urldate = {2023-12-21},
journal = {Cortex},
volume = {137},
pages = {179–193},
abstract = {Continuously prioritizing behaviourally relevant information from the environment for improved stimulus processing is a crucial function of attention. In the current MEG study, we investigated how ongoing oscillatory activity of both sensory and non-sensory brain regions are differentially impacted by attentional focus. Low-frequency phase alignment of neural activity in primary sensory areas, with respect to attended/ignored features has been suggested to support top-down prioritization. However, phase adjustment in frontoparietal regions has not been widely studied, despite general implication of these in top-down selection of information. To investigate this, we let participants perform an established intermodal selective attention task, where low-frequency auditory (1.6 Hz) and visual (1.8 Hz) stimuli were presented simultaneously. We instructed them to either attend to the auditory or to the visual stimuli and to detect targets while ignoring the other stimulus stream. As expected, the strongest phase adjustment was observed in primary sensory regions for auditory and for visual stimulation, independent of attentional focus. We found greater differences in phase locking between attended and ignored stimulation for the visual modality. Interestingly, auditory temporal regions show small but significant attention-dependent neural entrainment even for visual stimulation. Extending findings from invasive recordings in non-human primates, we demonstrate an effect of attentional focus on the phase of the entrained oscillations in auditory and visual cortex which may be driven by phase locked increases of induced power. While sensory areas adjusted the phase of the respective stimulation frequencies, attentional focus adjusted the peak frequencies in nonsensory areas. Spatially these areas show a striking overlap with core regions of the dorsal attention network and the frontoparietal network. This suggests that these areas prioritize the attended modality by optimally exploiting the temporal structure of stimulation. Overall, our study complements and extends previous work by showing a differential effect of attentional focus on entrained oscillations (or phase adjustment) in primary sensory areas and frontoparietal areas.},
keywords = {PROPixx, RESPONSEPixxMRI, SOUNDPixx},
pubstate = {published},
tppubtype = {article}
}
Drijvers, Linda; Jensen, Ole; Spaak, Eelke
Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information Journal Article
In: Human Brain Mapping, vol. 42, no. 4, pp. 1138–1152, 2021, ISSN: 1097-0193, (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/hbm.25282).
Abstract | Links | BibTeX | Tags: PROPixx
@article{drijvers_rapid_2021,
title = {Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information},
author = {Linda Drijvers and Ole Jensen and Eelke Spaak},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.25282},
doi = {10.1002/hbm.25282},
issn = {1097-0193},
year = {2021},
date = {2021-01-01},
urldate = {2024-01-31},
journal = {Human Brain Mapping},
volume = {42},
number = {4},
pages = {1138–1152},
abstract = {During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (fvisual − fauditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.},
note = {_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/hbm.25282},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
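The intermodulation peak described in the abstract above follows directly from the trigonometry of two frequency-tagged signals. The short Python sketch below is an illustrative simulation written for this page, not the authors' analysis code; the sampling rate, duration, and interaction strength are arbitrary placeholder values. It shows that a multiplicative (nonlinear) interaction between a 61 Hz and a 68 Hz signal produces power at the 7 Hz difference frequency, whereas a purely linear sum does not.

import numpy as np

fs = 1000.0                                # sampling rate in Hz (placeholder)
t = np.arange(0, 10, 1 / fs)               # 10 s of simulated signal
aud = np.sin(2 * np.pi * 61 * t)           # auditory tagging frequency (61 Hz)
vis = np.sin(2 * np.pi * 68 * t)           # visual tagging frequency (68 Hz)

linear = aud + vis                         # linear summation: no intermodulation
nonlinear = aud + vis + 0.5 * aud * vis    # multiplicative term: nonlinear integration

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    power = np.abs(np.fft.rfft(sig)) ** 2
    print(name, "power at 7 Hz:", round(power[np.argmin(np.abs(freqs - 7.0))], 1))
# Only the nonlinear signal shows a peak at 7 Hz (and at 129 Hz), because the
# product of two sinusoids contains components at the difference and sum frequencies.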
Fabius, Jasper H.; Fracasso, Alessio; Acunzo, David J.; Van der Stigchel, Stefan; Melcher, David
Low-Level Visual Information Is Maintained across Saccades, Allowing for a Postsaccadic Handoff between Visual Areas Journal Article
In: Journal of Neuroscience, vol. 40, no. 49, pp. 9476–9486, 2020, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Research Articles).
Abstract | Links | BibTeX | Tags: PROPixx, RESPONSEPixxMRI
@article{fabius_low-level_2020,
title = {Low-Level Visual Information Is Maintained across Saccades, Allowing for a Postsaccadic Handoff between Visual Areas},
author = {Jasper H. Fabius and Alessio Fracasso and David J. Acunzo and Stefan Van der Stigchel and David Melcher},
url = {https://www.jneurosci.org/content/40/49/9476},
doi = {10.1523/JNEUROSCI.1169-20.2020},
issn = {0270-6474, 1529-2401},
year = {2020},
date = {2020-12-01},
urldate = {2023-12-21},
journal = {Journal of Neuroscience},
volume = {40},
number = {49},
pages = {9476–9486},
abstract = {Experience seems continuous and detailed despite saccadic eye movements changing retinal input several times per second. There is debate whether neural signals related to updating across saccades contain information about stimulus features, or only location pointers without visual details. We investigated the time course of low-level visual information processing across saccades by decoding the spatial frequency of a stationary stimulus that changed from one visual hemifield to the other because of a horizontal saccadic eye movement. We recorded magnetoencephalography while human subjects (both sexes) monitored the orientation of a grating stimulus, making spatial frequency task irrelevant. Separate trials, in which subjects maintained fixation, were used to train a classifier, whose performance was then tested on saccade trials. Decoding performance showed that spatial frequency information of the presaccadic stimulus remained present for ∼200 ms after the saccade, transcending retinotopic specificity. Postsaccadic information ramped up rapidly after saccade offset. There was an overlap of over 100 ms during which decoding was significant from both presaccadic and postsaccadic processing areas. This suggests that the apparent richness of perception across saccades may be supported by the continuous availability of low-level information with a “soft handoff” of information during the initial processing sweep of the new fixation.
SIGNIFICANCE STATEMENT Saccades create frequent discontinuities in visual input, yet perception appears stable and continuous. How is this discontinuous input processed resulting in visual stability? Previous studies have focused on presaccadic remapping. Here we examined the time course of processing of low-level visual information (spatial frequency) across saccades with magnetoencephalography. The results suggest that spatial frequency information is not predictively remapped but also is not discarded. Instead, they suggest a soft handoff over time between different visual areas, making this information continuously available across the saccade. Information about the presaccadic stimulus remains available, while the information about the postsaccadic stimulus has also become available. The simultaneous availability of both the presaccadic and postsaccadic information could enable rich and continuous perception across saccades.},
note = {Publisher: Society for Neuroscience
Section: Research Articles},
keywords = {PROPixx, RESPONSEPixxMRI},
pubstate = {published},
tppubtype = {article}
}
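The decoding design described above, in which a classifier is trained on fixation trials and then tested on saccade trials, is a common cross-condition approach. The sketch below is a hypothetical, minimal illustration using scikit-learn and random placeholder data, not the authors' pipeline; a real analysis would use time-resolved MEG sensor data, cross-validation, and permutation statistics.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: trials x MEG sensors, with binary spatial-frequency labels.
X_fix, y_fix = rng.normal(size=(200, 102)), rng.integers(0, 2, 200)   # fixation trials
X_sac, y_sac = rng.normal(size=(100, 102)), rng.integers(0, 2, 100)   # saccade trials

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_fix, y_fix)                                        # train on fixation trials only
print("Cross-condition accuracy:", clf.score(X_sac, y_sac))  # generalize to saccade trials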
Giari, Giuliano; Leonardelli, Elisa; Tao, Yuan; Machado, Mayara; Fairhall, Scott L.
Spatiotemporal properties of the neural representation of conceptual content for words and pictures – an MEG study Journal Article
In: NeuroImage, vol. 219, pp. 116913, 2020, ISSN: 1053-8119.
Abstract | Links | BibTeX | Tags: PROPixx, RESPONSEPixxMRI
@article{giari_spatiotemporal_2020,
title = {Spatiotemporal properties of the neural representation of conceptual content for words and pictures – an MEG study},
author = {Giuliano Giari and Elisa Leonardelli and Yuan Tao and Mayara Machado and Scott L. Fairhall},
url = {https://www.sciencedirect.com/science/article/pii/S1053811920303992},
doi = {10.1016/j.neuroimage.2020.116913},
issn = {1053-8119},
year = {2020},
date = {2020-10-01},
urldate = {2023-12-21},
journal = {NeuroImage},
volume = {219},
pages = {116913},
abstract = {The entwined nature of perceptual and conceptual processes renders an understanding of the interplay between perceptual recognition and conceptual access a continuing challenge. Here, to disentangle perceptual and conceptual processing in the brain, we combine magnetoencephalography (MEG), picture and word presentation and representational similarity analysis (RSA). We replicate previous findings of early and robust sensitivity to semantic distances between objects presented as pictures and show earlier (~105 msec), but not later, representations can be accounted for by contemporary computer models of visual similarity (AlexNet). Conceptual content for word stimuli is reliably present in two temporal clusters, the first ranging from 230 to 335 msec, the second from 360 to 585 msec. The time-course of picture induced semantic content and the spatial location of conceptual representation were highly convergent, and the spatial distribution of both differed from that of words. While this may reflect differences in picture and word induced conceptual access, this underscores potential confounds in visual perceptual and conceptual processing. On the other hand, using the stringent criterion that neural and conceptual spaces must align, the robust representation of semantic content by 230–240 msec for visually unconfounded word stimuli significantly advances estimates of the timeline of semantic access and its orthographic and lexical precursors.},
keywords = {PROPixx, RESPONSEPixxMRI},
pubstate = {published},
tppubtype = {article}
}
Wutz, Andreas; Zazio, Agnese; Weisz, Nathan
Oscillatory Bursts in Parietal Cortex Reflect Dynamic Attention between Multiple Objects and Ensembles Journal Article
In: Journal of Neuroscience, vol. 40, no. 36, pp. 6927–6937, 2020, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Research Articles).
Abstract | Links | BibTeX | Tags: DATAPixx, PROPixx, TRACKPixx3
@article{wutz_oscillatory_2020,
title = {Oscillatory Bursts in Parietal Cortex Reflect Dynamic Attention between Multiple Objects and Ensembles},
author = {Andreas Wutz and Agnese Zazio and Nathan Weisz},
url = {https://www.jneurosci.org/content/40/36/6927},
doi = {10.1523/JNEUROSCI.0231-20.2020},
issn = {0270-6474, 1529-2401},
year = {2020},
date = {2020-09-01},
urldate = {2024-01-16},
journal = {Journal of Neuroscience},
volume = {40},
number = {36},
pages = {6927–6937},
abstract = {The visual system uses two complimentary strategies to process multiple objects simultaneously within a scene and update their spatial positions in real time. It either uses selective attention to individuate a complex, dynamic scene into a few focal objects (i.e., object individuation), or it represents multiple objects as an ensemble by distributing attention more globally across the scene (i.e., ensemble grouping). Neural oscillations may be a key signature for focal object individuation versus distributed ensemble grouping, because they are thought to regulate neural excitability over visual areas through inhibitory control mechanisms. We recorded whole-head MEG data during a multiple-object tracking paradigm, in which human participants (13 female, 11 male) switched between different instructions for object individuation and ensemble grouping on different trials. The stimuli, responses, and the demand to keep track of multiple spatial locations over time were held constant between the two conditions. We observed increased α-band power (9-13 Hz) packed into oscillatory bursts in bilateral inferior parietal cortex during multiple-object processing. Single-trial analysis revealed greater burst occurrences on object individuation versus ensemble grouping trials. By contrast, we found no differences using standard analyses on across-trials averaged α-band power. Moreover, the bursting effects occurred only below/at, but not above, the typical capacity limits for multiple-object processing (at ∼4 objects). Our findings reveal the real-time neural correlates underlying the dynamic processing of multiple-object scenarios, which are modulated by grouping strategies and capacity. They support a rhythmic, α-pulsed organization of dynamic attention to multiple objects and ensembles.
SIGNIFICANCE STATEMENT Dynamic multiple-object scenarios are an important problem in real-world and computer vision. They require keeping track of multiple objects as they move through space and time. Such problems can be solved in two ways: One can individuate a scene object by object, or alternatively group objects into ensembles. We observed greater occurrences of α-oscillatory burst events in parietal cortex for processing objects versus ensembles and below/at versus above processing capacity. These results demonstrate a unique top-down mechanism by which the brain dynamically adjusts its computational level between objects and ensembles. They help to explain how the brain copes with its capacity limitations in real-time environments and may lead the way to technological innovations for time-critical video analysis in computer vision.},
note = {Publisher: Society for Neuroscience
Section: Research Articles},
keywords = {DATAPixx, PROPixx, TRACKPixx3},
pubstate = {published},
tppubtype = {article}
}
Schweitzer, Richard; Rolfs, Martin
An adaptive algorithm for fast and reliable online saccade detection Journal Article
In: Behavior Research Methods, vol. 52, no. 3, pp. 1122–1139, 2020, ISSN: 1554-3528.
Abstract | Links | BibTeX | Tags: DATAPixx3, PROPixx, TRACKPixx3
@article{schweitzer_adaptive_2020,
title = {An adaptive algorithm for fast and reliable online saccade detection},
author = {Richard Schweitzer and Martin Rolfs},
url = {https://doi.org/10.3758/s13428-019-01304-3},
doi = {10.3758/s13428-019-01304-3},
issn = {1554-3528},
year = {2020},
date = {2020-06-01},
urldate = {2024-01-08},
journal = {Behavior Research Methods},
volume = {52},
number = {3},
pages = {1122–1139},
abstract = {To investigate visual perception around the time of eye movements, vision scientists manipulate stimuli contingent upon the onset of a saccade. For these experimental paradigms, timing is especially crucial, because saccade offset imposes a deadline on the display change. Although efficient online saccade detection can greatly improve timing, most algorithms rely on spatial-boundary techniques or absolute-velocity thresholds, which both suffer from weaknesses: late detections and false alarms, respectively. We propose an adaptive, velocity-based algorithm for online saccade detection that surpasses both standard techniques in speed and accuracy and allows the user to freely define the detection criteria. Inspired by the Engbert–Kliegl algorithm for microsaccade detection, our algorithm computes two-dimensional velocity thresholds from variance in the preceding fixation samples, while compensating for noisy or missing data samples. An optional direction criterion limits detection to the instructed saccade direction, further increasing robustness. We validated the algorithm by simulating its performance on a large saccade dataset and found that high detection accuracy (false-alarm rates of < 1%) could be achieved with detection latencies of only 3 ms. High accuracy was maintained even under simulated high-noise conditions. To demonstrate that purely intrasaccadic presentations are technically feasible, we devised an experimental test in which a Gabor patch drifted at saccadic peak velocities. Whereas this stimulus was invisible when presented during fixation, observers reliably detected it during saccades. Photodiode measurements verified that—including all system delays—the stimuli were physically displayed on average 20 ms after saccade onset. Thus, the proposed algorithm provides a valuable tool for gaze-contingent paradigms.},
keywords = {DATAPixx3, PROPixx, TRACKPixx3},
pubstate = {published},
tppubtype = {article}
}
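Gaze-contingent paradigms such as the one above depend on detecting saccades online with minimal latency. As a rough illustration of the underlying idea only, the sketch below implements an Engbert–Kliegl-style elliptic velocity criterion scaled from median-based velocity variability. It is a simplified, offline version written for this page, not the published adaptive online algorithm; the sampling rate, lambda, and minimum-run parameters are placeholder values.

import numpy as np

def detect_saccade_onset(x, y, fs=2000.0, lam=5.0, min_samples=3):
    """Return the index of the first sample whose 2D velocity exceeds an
    adaptive elliptic threshold for min_samples consecutive samples, or None."""
    vx = np.gradient(x) * fs                              # horizontal velocity (deg/s)
    vy = np.gradient(y) * fs                              # vertical velocity (deg/s)
    # Median-based estimate of velocity variability, as in Engbert & Kliegl (2003).
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= min_samples:
            return i - min_samples + 1                    # onset of the detected saccade
    return None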
Schweitzer, Richard; Rolfs, Martin
Intra-saccadic motion streaks as cues to linking object locations across saccades Journal Article
In: Journal of Vision, vol. 20, no. 4, pp. 17, 2020, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: PROPixx
@article{schweitzer_intra-saccadic_2020,
title = {Intra-saccadic motion streaks as cues to linking object locations across saccades},
author = {Richard Schweitzer and Martin Rolfs},
url = {https://doi.org/10.1167/jov.20.4.17},
doi = {10.1167/jov.20.4.17},
issn = {1534-7362},
year = {2020},
date = {2020-04-01},
urldate = {2024-01-08},
journal = {Journal of Vision},
volume = {20},
number = {4},
pages = {17},
abstract = {When visual objects shift rapidly across the retina, they produce motion blur. Intra-saccadic visual signals, caused incessantly by our own saccades, are thought to be eliminated at early stages of visual processing. Here we investigate whether they are still available to the visual system and could—in principle—be used as cues for localizing objects as they change locations on the retina. Using a high-speed projection system, we developed a trans-saccadic identification task in which brief but continuous intra-saccadic object motion was key to successful performance. Observers made a saccade to a target stimulus that moved rapidly either up or down, strictly during the eye movement. Just as the target reached its final position, an identical distractor stimulus appeared on the opposite side, resulting in a display of two identical stimuli upon saccade landing. Observers had to identify the original target using the only available clue: the target's intra-saccadic movement. In an additional replay condition, we presented the observers’ own intra-saccadic retinal stimulus trajectories during fixation. Compared to the replay condition, task performance was impaired during saccades but recovered fully when a post-saccadic blank was introduced. Reverse regression analyses and confirmatory experiments showed that performance increased markedly when targets had long movement durations, low spatial frequencies, and orientations parallel to their retinal trajectory—features that promote intra-saccadic motion streaks. Although the potential functional role of intra-saccadic visual signals is still unclear, our results suggest that they could provide cues to tracking objects that rapidly change locations across saccades.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Shenfield, Lucienne; Beanland, Vanessa; Apthorp, Deborah
Temporal predictability does not impact attentional blink performance: effects of fixed vs. random inter-trial intervals Journal Article
In: PeerJ, vol. 8, pp. e8677, 2020, ISSN: 2167-8359.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{shenfield_temporal_2020,
title = {Temporal predictability does not impact attentional blink performance: effects of fixed vs. random inter-trial intervals},
author = {Lucienne Shenfield and Vanessa Beanland and Deborah Apthorp},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7060903/},
doi = {10.7717/peerj.8677},
issn = {2167-8359},
year = {2020},
date = {2020-03-01},
urldate = {2024-01-18},
journal = {PeerJ},
volume = {8},
pages = {e8677},
abstract = {Background: Does the inclusion of a randomized inter-trial interval (ITI) impact performance on an Attentional Blink (AB) task? The AB phenomenon is often used as a test of transient attention; however, it is unclear whether incorporating aspects of sustained attention, by implementing a randomized ITI, would impact task performance. The current research sought to investigate this, by contrasting a standard version of the AB task with a random ITI version to determine whether performance changed, reflecting a change in difficulty, engagement, or motivation.
Method: Thirty university students (21 female; age range 18–57},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Barchiesi, Guido; Demarchi, Gianpaolo; Wilhelm, Frank H.; Hauswald, Anne; Sanchez, Gaëtan; Weisz, Nathan
Head magnetomyography (hMMG): A novel approach to monitor face and whole head muscular activity Journal Article
In: Psychophysiology, vol. 57, no. 3, pp. e13507, 2020, ISSN: 0048-5772.
Abstract | Links | BibTeX | Tags: PROPixx
@article{barchiesi_head_2020,
title = {Head magnetomyography (hMMG): A novel approach to monitor face and whole head muscular activity},
author = {Guido Barchiesi and Gianpaolo Demarchi and Frank H. Wilhelm and Anne Hauswald and Gaëtan Sanchez and Nathan Weisz},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7027552/},
doi = {10.1111/psyp.13507},
issn = {0048-5772},
year = {2020},
date = {2020-03-01},
urldate = {2024-01-03},
journal = {Psychophysiology},
volume = {57},
number = {3},
pages = {e13507},
abstract = {Muscular activity recording is of high basic science and clinical relevance and is typically achieved using electromyography (EMG). While providing detailed information about the state of a specific muscle, this technique has limitations such as the need for a priori assumptions about electrode placement and difficulty with recording muscular activity patterns from extended body areas at once. For head and face muscle activity, the present work aimed to overcome these restrictions by exploiting magnetoencephalography (MEG) as a whole head myographic recorder (head magnetomyography, hMMG). This is in contrast to common MEG studies, which treat muscular activity as artifact in electromagnetic brain activity. In a first proof-of-concept step, participants imitated emotional facial expressions performed by a model. Exploiting source projection algorithms, we were able to reconstruct muscular activity, showing spatial activation patterns in accord with the hypothesized muscular contractions. Going one step further, participants passively observed affective pictures with negative, neutral, or positive valence. Applying multivariate pattern analysis to the reconstructed hMMG signal, we were able to decode above chance the valence category of the presented pictures. Underlining the potential of hMMG, a searchlight analysis revealed that generally neglected neck muscles exhibit information on stimulus valence. Results confirm the utility of hMMG as a whole head electromyographic recorder to quantify muscular activation patterns including muscular regions that are typically not recorded with EMG. This key advantage beyond conventional EMG has substantial scientific and clinical potential. We present an innovative method called head magnetomyography (hMMG), which exploits magnetoencephalography (MEG) as a whole head electromyographic (EMG) recorder. Differently from the typical EMG recording, which needs an a priori selection of the placement of the electrodes, hMMG is able to detect muscular activity from many regions of the face and head simultaneously, including typically overlooked muscles. Our data show that hMMG can readily serve researchers in the emotion field and hold further scientific as well as clinical promise.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Sanchez, Gaëtan; Hartmann, Thomas; Fuscà, Marco; Demarchi, Gianpaolo; Weisz, Nathan
Decoding across sensory modalities reveals common supramodal signatures of conscious perception Journal Article
In: Proceedings of the National Academy of Sciences, vol. 117, no. 13, pp. 7437–7446, 2020, (Publisher: Proceedings of the National Academy of Sciences).
Abstract | Links | BibTeX | Tags: PROPixx, SOUNDPixx
@article{sanchez_decoding_2020,
title = {Decoding across sensory modalities reveals common supramodal signatures of conscious perception},
author = {Gaëtan Sanchez and Thomas Hartmann and Marco Fuscà and Gianpaolo Demarchi and Nathan Weisz},
url = {https://www.pnas.org/doi/10.1073/pnas.1912584117},
doi = {10.1073/pnas.1912584117},
year = {2020},
date = {2020-03-01},
urldate = {2024-01-02},
journal = {Proceedings of the National Academy of Sciences},
volume = {117},
number = {13},
pages = {7437–7446},
abstract = {An increasing number of studies highlight common brain regions and processes in mediating conscious sensory experience. While most studies have been performed in the visual modality, it is implicitly assumed that similar processes are involved in other sensory modalities. However, the existence of supramodal neural processes related to conscious perception has not been convincingly shown so far. Here, we aim to directly address this issue by investigating whether neural correlates of conscious perception in one modality can predict conscious perception in a different modality. In two separate experiments, we presented participants with successive blocks of near-threshold tasks involving subjective reports of tactile, visual, or auditory stimuli during the same magnetoencephalography (MEG) acquisition. Using decoding analysis in the poststimulus period between sensory modalities, our first experiment uncovered supramodal spatiotemporal neural activity patterns predicting conscious perception of the feeble stimulation. Strikingly, these supramodal patterns included activity in primary sensory regions not directly relevant to the task (e.g., neural activity in visual cortex predicting conscious perception of auditory near-threshold stimulation). We carefully replicate our results in a control experiment that furthermore show that the relevant patterns are independent of the type of report (i.e., whether conscious perception was reported by pressing or withholding a button press). Using standard paradigms for probing neural correlates of conscious perception, our findings reveal a common signature of conscious access across sensory modalities and illustrate the temporally late and widespread broadcasting of neural representations, even into task-unrelated primary sensory processing regions.},
note = {Publisher: Proceedings of the National Academy of Sciences},
keywords = {PROPixx, SOUNDPixx},
pubstate = {published},
tppubtype = {article}
}
Zhigalov, Alexander; Jensen, Ole
Alpha oscillations do not implement gain control in early visual cortex but rather gating in parieto-occipital regions Journal Article
In: Human Brain Mapping, vol. 41, no. 18, pp. 5176–5186, 2020, ISSN: 1097-0193, (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/hbm.25183).
Abstract | Links | BibTeX | Tags:
@article{zhigalov_alpha_2020,
title = {Alpha oscillations do not implement gain control in early visual cortex but rather gating in parieto-occipital regions},
author = {Alexander Zhigalov and Ole Jensen},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/hbm.25183},
doi = {10.1002/hbm.25183},
issn = {1097-0193},
year = {2020},
date = {2020-01-01},
urldate = {2024-01-31},
journal = {Human Brain Mapping},
volume = {41},
number = {18},
pages = {5176–5186},
abstract = {Spatial attention provides a mechanism for, respectively, enhancing relevant and suppressing irrelevant information. While it is well established that attention modulates oscillations in the alpha band, it remains unclear if alpha oscillations are involved in directly modulating the neuronal excitability associated with the allocation of spatial attention. In this study, in humans, we utilized a novel broadband frequency (60–70 Hz) tagging paradigm to quantify neuronal excitability in relation to alpha oscillations in a spatial attention paradigm. We used magnetoencephalography to characterize ongoing brain activity as it allows for localizing the sources of both the alpha and frequency tagging responses. We found that attentional modulation of alpha power and the frequency tagging response are uncorrelated over trials. Importantly, the neuronal sources of the tagging response were localized in early visual cortex (V1) whereas the sources of the alpha activity were identified around parieto-occipital sulcus. Moreover, we found that attention did not modulate the latency of the frequency tagged responses. Our findings point to alpha band oscillations serving a downstream gating role rather than implementing gain control of excitability in early visual regions.},
note = {_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/hbm.25183},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Read, Jenny C. A.; Wong, Zhen Yi; Yek, Xinye; Wong, Ying Xin; Bachtoula, Omar; Llamas-Cornejo, Ichasus; Serrano-Pedraza, Ignacio
ASTEROID stereotest v1.0: lower stereo thresholds using smaller, denser and faster dots Journal Article
In: Ophthalmic and Physiological Optics, vol. 40, no. 6, pp. 815–827, 2020, ISSN: 1475-1313, (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/opo.12737).
Abstract | Links | BibTeX | Tags: 3DPolarizer, PROPixx
@article{read_asteroid_2020,
title = {ASTEROID stereotest v1.0: lower stereo thresholds using smaller, denser and faster dots},
author = {Jenny C. A. Read and Zhen Yi Wong and Xinye Yek and Ying Xin Wong and Omar Bachtoula and Ichasus Llamas-Cornejo and Ignacio Serrano-Pedraza},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/opo.12737},
doi = {10.1111/opo.12737},
issn = {1475-1313},
year = {2020},
date = {2020-01-01},
urldate = {2024-01-19},
journal = {Ophthalmic and Physiological Optics},
volume = {40},
number = {6},
pages = {815–827},
abstract = {Purpose In 2019, we described ASTEROID, a new stereotest run on a 3D tablet computer which involves a four-alternative disparity detection task on a dynamic random-dot stereogram. Stereo thresholds measured with ASTEROID were well correlated with, but systematically higher than (by a factor of around 1.5), thresholds measured with previous laboratory stereotests or the Randot Preschool clinical stereotest. We speculated that this might be due to the relatively large, sparse dots used in ASTEROID v0.9. Here, we introduce and test the stereo thresholds and test-repeatability of the new ASTEROID v1.0, which uses precomputed images to allow stereograms made up of much smaller, denser dots. Methods Stereo thresholds and test/retest repeatability were tested and compared between the old and new versions of ASTEROID (n = 75) and the Randot Circles (n = 31) stereotest, in healthy young adults. Results Thresholds on ASTEROID v1.0 are lower (better) than on ASTEROID v0.9 by a factor of 1.4, and do not differ significantly from thresholds on the Randot Circles. Thresholds were roughly log-normally distributed with a mean of 1.54 log10 arcsec (35 arcsec) on ASTEROID v1.0 compared to 1.70 log10 arcsec (50 arcsec) on ASTEROID v0.9. The standard deviation between observers was the same for both versions, 0.32 log10 arcsec, corresponding to a factor of 2 above and below the mean. There was no difference between the versions in their test/retest repeatability, with 95% coefficient of repeatability = 0.46 log10 arcsec (a factor of 2.9 or 1.5 octaves) and a Pearson correlation of 0.8 (comparable to other clinical stereotests). Conclusion The poorer stereo thresholds previously reported with ASTEROID v0.9 appear to have been due to the relatively large, coarse dots and low density used, rather than to some other aspect of the technology. Employing the small dots and high density used in ASTEROID v1.0, thresholds and test/retest repeatability are similar to other clinical stereotests.},
note = {_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/opo.12737},
keywords = {3DPolarizer, PROPixx},
pubstate = {published},
tppubtype = {article}
}
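Since ASTEROID thresholds are reported in log10 arcsec, the quick arithmetic check below (written for this page, not taken from the paper) converts the quoted values back to linear units:

print(round(10 ** 1.54))      # ASTEROID v1.0 mean threshold: ~35 arcsec
print(round(10 ** 1.70))      # ASTEROID v0.9 mean threshold: ~50 arcsec
print(round(10 ** 0.32, 1))   # between-observer SD as a multiplicative factor: ~2.1
print(round(10 ** 0.46, 1))   # 95% coefficient of repeatability: ~2.9 (about 1.5 octaves)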