Open Access Publications
From research on the visual systems of turtles, to the perception of faces with or without makeup, to transsaccadic perception and perceptual cycles in the brain, VPixx hardware and software solutions have supported research in vision science and beyond for over 20 years. We are immensely proud of the discoveries and accomplishments of our customers across the world.
On this page you will find a non-exhaustive list of peer-reviewed, open access publications citing VPixx tools, dating back to 2003. Browse the list or use the tag filter to search for specific products. Note that we report the device used in each paper as described by the authors; this may not reflect the specific model of device used (e.g., VIEWPixx vs. VIEWPixx /3D), nor do we guarantee the accuracy of published content. Please contact our team at [email protected] if you have any questions about a specific paper.
Curious about a specific application of our tools? Can’t find what you are looking for? Our staff scientists are happy to discuss paradigms and protocols using our equipment by email or video chat. Please contact us with your questions.
Want to have your work added to our library? Send us a message at [email protected] and we will add it. Your article must be peer-reviewed, open access, and it must indicate VPixx products were used in the research.
Use the search tool below to search for specific terms among the titles, authors and abstracts in our library.
Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M.
Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding Journal Article
In: Scientific Reports, vol. 7, no. 1, pp. 15037, 2017, ISSN: 2045-2322, (Number: 1 Publisher: Nature Publishing Group).
Tags: VIEWPixxEEG
@article{wittevrongel_code-modulated_2017,
title = {Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding},
author = {Benjamin Wittevrongel and Elia Van Wolputte and Marc M. Van Hulle},
url = {https://www.nature.com/articles/s41598-017-15373-x},
doi = {10.1038/s41598-017-15373-x},
issn = {2045-2322},
year = {2017},
date = {2017-11-01},
urldate = {2023-12-22},
journal = {Scientific Reports},
volume = {7},
number = {1},
pages = {15037},
abstract = {When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer’s occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for the reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend to exclude the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.},
note = {Number: 1
Publisher: Nature Publishing Group},
keywords = {VIEWPixxEEG},
pubstate = {published},
tppubtype = {article}
}
Hübner, Carolin; Schütz, Alexander C.
Numerosity estimation benefits from transsaccadic information integration Journal Article
In: Journal of Vision, vol. 17, no. 13, pp. 12, 2017, ISSN: 1534-7362.
Tags: PROPixx
@article{hubner_numerosity_2017,
title = {Numerosity estimation benefits from transsaccadic information integration},
author = {Carolin Hübner and Alexander C. Schütz},
url = {https://doi.org/10.1167/17.13.12},
doi = {10.1167/17.13.12},
issn = {1534-7362},
year = {2017},
date = {2017-11-01},
urldate = {2024-01-12},
journal = {Journal of Vision},
volume = {17},
number = {13},
pages = {12},
abstract = {Humans achieve a stable and homogeneous representation of their visual environment, although visual processing varies across the visual field. Here we investigated the circumstances under which peripheral and foveal information is integrated for numerosity estimation across saccades. We asked our participants to judge the number of black and white dots on a screen. Information was presented either in the periphery before a saccade, in the fovea after a saccade, or in both areas consecutively to measure transsaccadic integration. In contrast to previous findings, we found an underestimation of numerosity for foveal presentation and an overestimation for peripheral presentation. We used a maximum-likelihood model to predict accuracy and reliability in the transsaccadic condition based on peripheral and foveal values. We found near-optimal integration of peripheral and foveal information, consistently with previous findings about orientation integration. In three consecutive experiments, we disrupted object continuity between the peripheral and foveal presentations to probe the limits of transsaccadic integration. Even for global changes on our numerosity stimuli, no influence of object discontinuity was observed. Overall, our results suggest that transsaccadic integration is a robust mechanism that also works for complex visual features such as numerosity and is operative despite internal or external mismatches between foveal and peripheral information. Transsaccadic integration facilitates an accurate and reliable perception of our environment.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Seillier, Lenka; Lorenz, Corinna; Kawaguchi, Katsuhisa; Ott, Torben; Nieder, Andreas; Pourriahi, Paria; Nienborg, Hendrikje
Serotonin Decreases the Gain of Visual Responses in Awake Macaque V1 Journal Article
In: Journal of Neuroscience, vol. 37, no. 47, pp. 11390–11405, 2017, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Research Articles).
Tags: 3DPolarizer, PROPixx
@article{seillier_serotonin_2017,
title = {Serotonin Decreases the Gain of Visual Responses in Awake Macaque V1},
author = {Lenka Seillier and Corinna Lorenz and Katsuhisa Kawaguchi and Torben Ott and Andreas Nieder and Paria Pourriahi and Hendrikje Nienborg},
url = {https://www.jneurosci.org/content/37/47/11390},
doi = {10.1523/JNEUROSCI.1339-17.2017},
issn = {0270-6474, 1529-2401},
year = {2017},
date = {2017-11-01},
urldate = {2024-01-17},
journal = {Journal of Neuroscience},
volume = {37},
number = {47},
pages = {11390–11405},
abstract = {Serotonin, an important neuromodulator in the brain, is implicated in affective and cognitive functions. However, its role even for basic cortical processes is controversial. For example, in the mammalian primary visual cortex (V1), heterogenous serotonergic modulation has been observed in anesthetized animals. Here, we combined extracellular single-unit recordings with iontophoresis in awake animals. We examined the role of serotonin on well-defined tuning properties (orientation, spatial frequency, contrast, and size) in V1 of two male macaque monkeys. We find that in the awake macaque the modulatory effect of serotonin is surprisingly uniform: it causes a mainly multiplicative decrease of the visual responses and a slight increase in the stimulus-selective response latency. Moreover, serotonin neither systematically changes the selectivity or variability of the response, nor the interneuronal correlation unexplained by the stimulus (“noise-correlation”). The modulation by serotonin has qualitative similarities with that for a decrease in stimulus contrast, but differs quantitatively from decreasing contrast. It can be captured by a simple additive change to a threshold-linear spiking nonlinearity. Together, our results show that serotonin is well suited to control the response gain of neurons in V1 depending on the animal's behavioral or motivational context, complementing other known state-dependent gain-control mechanisms.
SIGNIFICANCE STATEMENT Serotonin is an important neuromodulator in the brain and a major target for drugs used to treat psychiatric disorders. Nonetheless, surprisingly little is known about how it shapes information processing in sensory areas. Here we examined the serotonergic modulation of visual processing in the primary visual cortex of awake behaving macaque monkeys. We found that serotonin mainly decreased the gain of the visual responses, without systematically changing their selectivity, variability, or covariability. This identifies a simple computational function of serotonin for state-dependent sensory processing, depending on the animal's affective or motivational state.},
note = {Publisher: Society for Neuroscience
Section: Research Articles},
keywords = {3DPolarizer, PROPixx},
pubstate = {published},
tppubtype = {article}
}
Smith, Danielle; Ropar, Danielle; Allen, Harriet A.
The Integration of Occlusion and Disparity Information for Judging Depth in Autism Spectrum Disorder Journal Article
In: Journal of Autism and Developmental Disorders, vol. 47, no. 10, pp. 3112–3124, 2017, ISSN: 1573-3432.
Tags: 3DPixx, DATAPixx
@article{smith_integration_2017,
title = {The Integration of Occlusion and Disparity Information for Judging Depth in Autism Spectrum Disorder},
author = {Danielle Smith and Danielle Ropar and Harriet A. Allen},
url = {https://doi.org/10.1007/s10803-017-3234-x},
doi = {10.1007/s10803-017-3234-x},
issn = {1573-3432},
year = {2017},
date = {2017-10-01},
urldate = {2024-01-18},
journal = {Journal of Autism and Developmental Disorders},
volume = {47},
number = {10},
pages = {3112–3124},
abstract = {In autism spectrum disorder (ASD), atypical integration of visual depth cues may be due to flattened perceptual priors or selective fusion. The current study attempts to disentangle these explanations by psychophysically assessing within-modality integration of ordinal (occlusion) and metric (disparity) depth cues while accounting for sensitivity to stereoscopic information. Participants included 22 individuals with ASD and 23 typically developing matched controls. Although adults with ASD were found to have significantly poorer stereoacuity, they were still able to automatically integrate conflicting depth cues, lending support to the idea that priors are intact in ASD. However, dissimilarities in response speed variability between the ASD and TD groups suggests that there may be differences in the perceptual decision-making aspect of the task.},
keywords = {3DPixx, DATAPixx},
pubstate = {published},
tppubtype = {article}
}
Poth, Christian; Horstmann, Gernot
Assessing the monitor warm-up time required before a psychological experiment can begin Journal Article
In: The Quantitative Methods for Psychology, vol. 13, pp. 166–173, 2017.
Tags: XRitei1Pro
@article{poth_assessing_2017,
title = {Assessing the monitor warm-up time required before a psychological experiment can begin},
author = {Christian Poth and Gernot Horstmann},
doi = {10.20982/tqmp.13.3.p166},
year = {2017},
date = {2017-10-01},
journal = {The Quantitative Methods for Psychology},
volume = {13},
pages = {166–173},
keywords = {XRitei1Pro},
pubstate = {published},
tppubtype = {article}
}
Wallis, Thomas S. A.; Funke, Christina M.; Ecker, Alexander S.; Gatys, Leon A.; Wichmann, Felix A.; Bethge, Matthias
A parametric texture model based on deep convolutional features closely matches texture appearance for humans Journal Article
In: Journal of Vision, vol. 17, no. 12, pp. 5, 2017, ISSN: 1534-7362.
Tags: VIEWPixx3D
@article{wallis_parametric_2017,
title = {A parametric texture model based on deep convolutional features closely matches texture appearance for humans},
author = {Thomas S. A. Wallis and Christina M. Funke and Alexander S. Ecker and Leon A. Gatys and Felix A. Wichmann and Matthias Bethge},
url = {https://doi.org/10.1167/17.12.5},
doi = {10.1167/17.12.5},
issn = {1534-7362},
year = {2017},
date = {2017-10-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {17},
number = {12},
pages = {5},
abstract = {Our visual environment is full of texture—“stuff” like cloth, bark, or gravel as distinct from “things” like dresses, trees, or paths—and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery (“parafoveal”) and when observers were able to make eye movements to all three patches (“inspection”). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Roudaia, Eugenie; Faubert, Jocelyn
Different effects of aging and gender on the temporal resolution in attentional tracking Journal Article
In: Journal of Vision, vol. 17, no. 11, pp. 1, 2017, ISSN: 1534-7362.
Tags: VIEWPixx
@article{roudaia_different_2017,
title = {Different effects of aging and gender on the temporal resolution in attentional tracking},
author = {Eugenie Roudaia and Jocelyn Faubert},
url = {https://doi.org/10.1167/17.11.1},
doi = {10.1167/17.11.1},
issn = {1534-7362},
year = {2017},
date = {2017-09-01},
urldate = {2024-01-02},
journal = {Journal of Vision},
volume = {17},
number = {11},
pages = {1},
abstract = {The current study examined the role of temporal resolution of attention in the decline in multiple object tracking abilities with healthy aging. The temporal resolution of attention is known to limit attentional tracking of one and multiple targets (Holcombe & Chen, 2013). Here, we examined whether aging is associated with a lower temporal resolution of attention when tracking one target, the efficiency of splitting attention across multiple targets, or both. Stimuli comprised three concentric rings containing five or 10 equally spaced dots. While maintaining central fixation, younger and older participants tracked a target dot on one, two, or three rings while the rings rotated around fixation in random directions for 5 s. Rotational speed was varied to estimate speed or temporal frequency thresholds in six conditions. Results showed that younger and older participants had similar temporal frequency thresholds for tracking one target, but the addition of one and two more targets reduced thresholds more in the older group compared to the younger group. Gender also affected performance, with men having higher temporal frequency thresholds than women, independently of the number of targets. These findings indicate that the temporal resolution of attention for a single target depends on gender but is not affected by aging, whereas aging specifically affects the efficiency of dividing attention across multiple targets.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Ross, Nicholas M.; Goettker, Alexander; Schütz, Alexander C.; Braun, Doris I.; Gegenfurtner, Karl R.
Discrimination of curvature from motion during smooth pursuit eye movements and fixation Journal Article
In: Journal of Neurophysiology, vol. 118, no. 3, pp. 1762–1774, 2017, ISSN: 0022-3077.
Tags: VIEWPixx
@article{ross_discrimination_2017,
title = {Discrimination of curvature from motion during smooth pursuit eye movements and fixation},
author = {Nicholas M. Ross and Alexander Goettker and Alexander C. Schütz and Doris I. Braun and Karl R. Gegenfurtner},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5596124/},
doi = {10.1152/jn.00324.2017},
issn = {0022-3077},
year = {2017},
date = {2017-09-01},
urldate = {2024-01-02},
journal = {Journal of Neurophysiology},
volume = {118},
number = {3},
pages = {1762–1774},
abstract = {Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Yates, Jacob L.; Park, Il Memming; Katz, Leor N.; Pillow, Jonathan W.; Huk, Alexander C.
Functional dissection of signal and noise in MT and LIP during decision-making Journal Article
In: Nature neuroscience, vol. 20, no. 9, pp. 1285–1292, 2017, ISSN: 1097-6256.
Tags: DATAPixx
@article{yates_functional_2017,
title = {Functional dissection of signal and noise in MT and LIP during decision-making},
author = {Jacob L. Yates and Il Memming Park and Leor N. Katz and Jonathan W. Pillow and Alexander C. Huk},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5673485/},
doi = {10.1038/nn.4611},
issn = {1097-6256},
year = {2017},
date = {2017-09-01},
urldate = {2024-01-03},
journal = {Nature neuroscience},
volume = {20},
number = {9},
pages = {1285–1292},
abstract = {During perceptual decision making, responses in the middle temporal (MT) and lateral intraparietal (LIP) areas appear to map onto theoretically defined quantities, with MT representing instantaneous motion evidence and LIP reflecting the accumulated evidence. However, several aspects of the transformation between the two areas have not been empirically tested. We therefore performed multi-stage systems identification analyses of the simultaneous activity of MT and LIP during individual decisions. We found that monkeys based their choices on evidence presented in early epochs of the motion stimulus, and that substantial early weighting of motion was present in MT responses. LIP’s responses recapitulated MT’s early weighting and contained a choice-dependent buildup that was distinguishable from motion integration. Furthermore, trial-by-trial variability in LIP did not depend on MT activity. These results identify important deviations from the idealizations of MT and LIP and motivate inquiry into sensorimotor computations that may intervene between MT and LIP.},
keywords = {DATAPixx},
pubstate = {published},
tppubtype = {article}
}
Alais, David; Ho, Tam; Han, Shui’er; Van der Burg, Erik
A Matched Comparison Across Three Different Sensory Pairs of Cross-Modal Temporal Recalibration From Sustained and Transient Adaptation Journal Article
In: i-Perception, vol. 8, no. 4, pp. 2041669517718697, 2017, ISSN: 2041-6695, (Publisher: SAGE Publications).
Tags: DATAPixx, VIEWPixx
@article{alais_matched_2017,
title = {A Matched Comparison Across Three Different Sensory Pairs of Cross-Modal Temporal Recalibration From Sustained and Transient Adaptation},
author = {David Alais and Tam Ho and Shui’er Han and Erik Van der Burg},
url = {https://doi.org/10.1177/2041669517718697},
doi = {10.1177/2041669517718697},
issn = {2041-6695},
year = {2017},
date = {2017-08-01},
urldate = {2023-12-21},
journal = {i-Perception},
volume = {8},
number = {4},
pages = {2041669517718697},
abstract = {Sustained exposure to an asynchronous multisensory signal causes perceived simultaneity to shift in the direction of the leading component of the adapting stimulus. This is known as temporal recalibration, and recent evidence suggests that it can occur very rapidly, even after a single asynchronous audiovisual (AV) stimulus. However, this form of rapid recalibration appears to be unique to AV stimuli, in contrast to recalibration following sustained asynchronies which occurs with audiotactile (AT) and visuotactile (VT) stimuli. This study examines temporal recalibration to AV, VT and AT asynchrony with spatially collocated stimuli using a design that produces both sustained and inter-trial recalibration by combining the traditional sustained adaptation approach with an inter-trial analysis of sequential dependencies in an extended test period. Thus, we compare temporal recalibration to both sustained and transient asynchrony in three crossmodal combinations using the same design, stimuli and observers. The results reveal that prolonged exposure to asynchrony produced equivalent temporal recalibration for all combinations: AV, AT and VT. The pattern for rapid, inter-trial recalibration was very different. Rapid recalibration occurred strongly for AV stimuli, weakly for AT and did not occur at all for VT. For all sensory pairings, recalibration from sustained asynchrony decayed to baseline during the test phase while inter-trial recalibration was present and stable throughout testing, suggesting different mechanisms may underlie adaptation at long and short timescales.},
note = {Publisher: SAGE Publications},
keywords = {DATAPixx, VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Jules Étienne, Cédric; Arleo, Angelo; Allard, Rémy
Maximizing noise energy for noise-masking studies Journal Article
In: Behavior Research Methods, vol. 49, no. 4, pp. 1278–1290, 2017, ISSN: 1554-3528.
Tags: VIEWPixx
@article{jules_etienne_maximizing_2017,
title = {Maximizing noise energy for noise-masking studies},
author = {Cédric Jules Étienne and Angelo Arleo and Rémy Allard},
url = {https://doi.org/10.3758/s13428-016-0786-1},
doi = {10.3758/s13428-016-0786-1},
issn = {1554-3528},
year = {2017},
date = {2017-08-01},
urldate = {2024-01-12},
journal = {Behavior Research Methods},
volume = {49},
number = {4},
pages = {1278–1290},
abstract = {Noise-masking experiments are widely used to investigate visual functions. To be useful, noise generally needs to be strong enough to noticeably impair performance, but under some conditions, noise does not impair performance even when its contrast approaches the maximal displayable limit of 100 %. To extend the usefulness of noise-masking paradigms over a wider range of conditions, the present study developed a noise with great masking strength. There are two typical ways of increasing masking strength without exceeding the limited contrast range: use binary noise instead of Gaussian noise or filter out frequencies that are not relevant to the task (i.e., which can be removed without affecting performance). The present study combined these two approaches to further increase masking strength. We show that binarizing the noise after the filtering process substantially increases the energy at frequencies within the pass-band of the filter given equated total contrast ranges. A validation experiment showed that similar performances were obtained using binarized-filtered noise and filtered noise (given equated noise energy at the frequencies within the pass-band) suggesting that the binarization operation, which substantially reduced the contrast range, had no significant impact on performance. We conclude that binarized-filtered noise (and more generally, truncated-filtered noise) can substantially increase the energy of the noise at frequencies within the pass-band. Thus, given a limited contrast range, binarized-filtered noise can display higher energy levels than Gaussian noise and thereby widen the range of conditions over which noise-masking paradigms can be useful.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Mather, George; Sharman, Rebecca J.; Parsons, Todd
Visual adaptation alters the apparent speed of real-world actions Journal Article
In: Scientific Reports, vol. 7, no. 1, pp. 6738, 2017, ISSN: 2045-2322, (Number: 1 Publisher: Nature Publishing Group).
Tags: RESPONSEPixx, VIEWPixx3D
@article{mather_visual_2017,
title = {Visual adaptation alters the apparent speed of real-world actions},
author = {George Mather and Rebecca J. Sharman and Todd Parsons},
url = {https://www.nature.com/articles/s41598-017-06841-5},
doi = {10.1038/s41598-017-06841-5},
issn = {2045-2322},
year = {2017},
date = {2017-07-01},
urldate = {2024-01-18},
journal = {Scientific Reports},
volume = {7},
number = {1},
pages = {6738},
abstract = {The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow-motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video (‘re-normalisation’). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.},
note = {Number: 1
Publisher: Nature Publishing Group},
keywords = {RESPONSEPixx, VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam
Context effects on smooth pursuit and manual interception of a disappearing target Journal Article
In: Journal of Neurophysiology, vol. 118, no. 1, pp. 404–415, 2017, ISSN: 0022-3077.
Tags: PROPixx
@article{kreyenmeier_context_2017,
title = {Context effects on smooth pursuit and manual interception of a disappearing target},
author = {Philipp Kreyenmeier and Jolande Fooken and Miriam Spering},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5501917/},
doi = {10.1152/jn.00217.2017},
issn = {0022-3077},
year = {2017},
date = {2017-07-01},
urldate = {2024-01-02},
journal = {Journal of Neurophysiology},
volume = {118},
number = {1},
pages = {404–415},
abstract = {In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object (“ball”) with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated “hit zone.” In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Kalogeropoulou, Zampeta; Rolfs, Martin
Saccadic eye movements do not disrupt the deployment of feature-based attention Journal Article
In: Journal of Vision, vol. 17, no. 8, pp. 4, 2017, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{kalogeropoulou_saccadic_2017,
title = {Saccadic eye movements do not disrupt the deployment of feature-based attention},
author = {Zampeta Kalogeropoulou and Martin Rolfs},
url = {https://doi.org/10.1167/17.8.4},
doi = {10.1167/17.8.4},
issn = {1534-7362},
year = {2017},
date = {2017-07-01},
urldate = {2024-01-17},
journal = {Journal of Vision},
volume = {17},
number = {8},
pages = {4},
abstract = {The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue–stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Papinutto, Michael; Lao, Junpeng; Ramon, Meike; Caldara, Roberto; Miellet, Sébastien
The Facespan—the perceptual span for face recognition Journal Article
In: Journal of Vision, vol. 17, no. 5, pp. 16, 2017, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{papinutto_facespanperceptual_2017,
title = {The Facespan—the perceptual span for face recognition},
author = {Michael Papinutto and Junpeng Lao and Meike Ramon and Roberto Caldara and Sébastien Miellet},
url = {https://doi.org/10.1167/17.5.16},
doi = {10.1167/17.5.16},
issn = {1534-7362},
year = {2017},
date = {2017-06-01},
urldate = {2024-01-18},
journal = {Journal of Vision},
volume = {17},
number = {5},
pages = {16},
abstract = {In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces—the Facespan—remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight size apertures. Analyses of Structural Similarity comparing the available information during spotlight and natural viewing conditions indicate that the Facespan—the minimum spatial extent of preserved facial information leading to comparable performance as in natural viewing—encompasses 7° of visual angle in our viewing conditions (size of the face stimulus: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations that will address if and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Hugrass, Laila; Slavikova, Jana; Horvat, Melissa; Al Musawi, Alaa; Crewther, David
Temporal brightness illusion changes color perception of “the dress” Journal Article
In: Journal of Vision, vol. 17, no. 5, pp. 6, 2017, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: DATAPixx, RESPONSEPixx, VPixxProgram
@article{hugrass_temporal_2017,
title = {Temporal brightness illusion changes color perception of “the dress”},
author = {Laila Hugrass and Jana Slavikova and Melissa Horvat and Alaa Al Musawi and David Crewther},
url = {https://doi.org/10.1167/17.5.6},
doi = {10.1167/17.5.6},
issn = {1534-7362},
year = {2017},
date = {2017-06-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {17},
number = {5},
pages = {6},
abstract = {“The dress” has provoked intensive commentary among psychophysicists, especially in relation to color vision. Researchers have shown that manipulating illuminance cues can influence the perceived colors of the dress. Here we investigate whether illusory shifts in brightness can shift color perception of the dress. Drifting achromatic gratings with fast off and fast on shading profiles are known to give an illusion of brightening or darkening, respectively. We superimposed rotating sawtooth gratings on a series of dress images that morphed from extreme white/gold through to blue/black. In a sample of 18 adults (11 with white/gold dress percept and seven with blue/black percept), a two-alternative, forced-choice constant stimulus task measured the morphed image point at which each observer was equally likely to categorize the dress as white/gold or blue/black (the point of subjective equality or PSE). Despite manifest individual differences in the PSE, the two sawtooth temporal profiles consistently changed the perceived colors of the dress. Perceptual dimming shifted color categorization toward blue/black whereas perceptual brightening shifted color categorization toward white/gold. We conclude that color categorization is influenced substantially by illusory shifts in brightness.},
keywords = {DATAPixx, RESPONSEPixx, VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Leube, Alexander; Rifai, Katharina
Sampling rate influences saccade detection in mobile eye tracking of a reading task Journal Article
In: Journal of Eye Movement Research, vol. 10, no. 3, 2017, ISSN: 1995-8692.
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{leube_sampling_2017,
title = {Sampling rate influences saccade detection in mobile eye tracking of a reading task},
author = {Alexander Leube and Katharina Rifai},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7141092/},
doi = {10.16910/jemr.10.3.3},
issn = {1995-8692},
year = {2017},
date = {2017-06-01},
urldate = {2023-12-22},
journal = {Journal of Eye Movement Research},
volume = {10},
number = {3},
abstract = {The purpose of this study was to compare saccade detection characteristics in two mobile eye trackers with different sampling rates in a natural task. Gaze data of 11 participants were recorded in one 60 Hz and one 120 Hz mobile eye tracker and compared directly to the saccades detected by a 1000 Hz stationary tracker while a reading task was performed. Saccades and fixations were detected using a velocity-based algorithm and their properties analyzed. Results showed that there was no significant difference in the number of detected fixations, but mean fixation durations differed between the 60 Hz mobile and the stationary eye tracker. The 120 Hz mobile eye tracker showed a significant increase in the detection rate of saccades and an improved estimation of the mean saccade duration, compared to the 60 Hz eye tracker. To conclude, for the detection and analysis of fast eye movements, such as saccades, it is better to use a 120 Hz mobile eye tracker.},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Wolf, Christian; Schütz, Alexander C.
Earlier saccades to task-relevant targets irrespective of relative gain between peripheral and foveal information Journal Article
In: Journal of Vision, vol. 17, no. 6, pp. 21, 2017, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{wolf_earlier_2017,
title = {Earlier saccades to task-relevant targets irrespective of relative gain between peripheral and foveal information},
author = {Christian Wolf and Alexander C. Schütz},
url = {https://doi.org/10.1167/17.6.21},
doi = {10.1167/17.6.21},
issn = {1534-7362},
year = {2017},
date = {2017-06-01},
urldate = {2024-01-02},
journal = {Journal of Vision},
volume = {17},
number = {6},
pages = {21},
abstract = {Saccades bring objects of interest onto the fovea for high-acuity processing. Saccades to rewarded targets show shorter latencies that correlate negatively with expected motivational value. Shorter latencies are also observed when the saccade target is relevant for a perceptual discrimination task. Here we tested whether saccade preparation is equally influenced by informational value as it is by motivational value. We defined informational value as the probability that information is task-relevant times the ratio between postsaccadic foveal and presaccadic peripheral discriminability. Using a gaze-contingent display, we independently manipulated peripheral and foveal discriminability of the saccade target. Latencies of saccades with perceptual task were reduced by 36 ms in general, but they were not modulated by the information saccades provide (Experiments 1 and 2). However, latencies showed a clear negative linear correlation with the probability that the target is task-relevant (Experiment 3). We replicated that the facilitation by a perceptual task is spatially specific and not due to generally heightened arousal (Experiment 4). Finally, the facilitation only emerged when the perceptual task is in the visual but not in the auditory modality (Experiment 5). Taken together, these results suggest that saccade latencies are not equally modulated by informational value as by motivational value. The facilitation by a perceptual task only arises when task-relevant visual information is foveated, irrespective of whether the foveation is useful or not.},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}
Brown, Alyse C.; Crewther, David P.
Autistic Children Show a Surprising Relationship between Global Visual Perception, Non-Verbal Intelligence and Visual Parvocellular Function, Not Seen in Typically Developing Children Journal Article
In: Frontiers in Human Neuroscience, vol. 11, pp. 239, 2017, ISSN: 1662-5161.
Abstract | Links | BibTeX | Tags: DATAPixx, VPixxProgram
@article{brown_autistic_2017,
title = {Autistic Children Show a Surprising Relationship between Global Visual Perception, Non-Verbal Intelligence and Visual Parvocellular Function, Not Seen in Typically Developing Children},
author = {Alyse C. Brown and David P. Crewther},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5425824/},
doi = {10.3389/fnhum.2017.00239},
issn = {1662-5161},
year = {2017},
date = {2017-05-01},
urldate = {2023-12-21},
journal = {Frontiers in Human Neuroscience},
volume = {11},
pages = {239},
abstract = {Despite much current research into the visual processing style of individuals with Autism Spectrum Disorder (ASD), understanding of the neural mechanisms is lagging, especially with respect to the contributions of the overlapping dichotomies of magnocellular/parvocellular (afferent neural pathways), global/local (perception) and dorsal/ventral (cortical streams). Here, we addressed this deficiency by measuring inspection times (ITs) for novel global/local stimuli as well as recording nonlinear visually evoked potentials (VEPs), in particular, magnocellular and parvocellular temporal efficiencies. The study was conducted on a group of male ASD children and a typically developing (TD) group matched for mean age and mean non-verbal intelligence, as measured by the Raven’s Progressive Matrices. The IT results did not differ between groups, however a negative correlation between global IT and Raven’s score was found in the ASD group, that was not evident in the TD group. Nonlinear VEP showed the ASD group had smaller amplitude parvocellular-generated second order responses compared to the TD group. This is a sign of improved temporal responsiveness in ASD vs. TD groups. Principal Component Analysis linked global IT, non-verbal intelligence scores and VEP parvocellular efficiency in a single factor for the ASD but not the TD group. The results are suggestive of a constraint on pathways available for cognitive response in the ASD group, with temporal processing for those with ASD becoming more reliant on the parvocellular pathway.},
keywords = {DATAPixx, VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Smith, Anika K.; Wade, Alex R.; Penkman, Kirsty E. H.; Baker, Daniel H.
Dietary modulation of cortical excitation and inhibition Journal Article
In: Journal of Psychopharmacology, vol. 31, no. 5, pp. 632–637, 2017, ISSN: 0269-8811, (Publisher: SAGE Publications Ltd STM).
Abstract | Links | BibTeX | Tags: VIEWPixx
@article{smith_dietary_2017,
title = {Dietary modulation of cortical excitation and inhibition},
author = {Anika K. Smith and Alex R. Wade and Kirsty E. H. Penkman and Daniel H. Baker},
url = {https://doi.org/10.1177/0269881117699613},
doi = {10.1177/0269881117699613},
issn = {0269-8811},
year = {2017},
date = {2017-05-01},
urldate = {2024-01-02},
journal = {Journal of Psychopharmacology},
volume = {31},
number = {5},
pages = {632–637},
abstract = {The balance of excitatory and inhibitory neurotransmitters in the brain affects both neural responses and behaviour in humans and animals. Here we investigated whether dietary intervention aimed at increasing levels of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA) can influence neural responses to basic sensory stimuli. Using a steady-state electroencephalography (EEG) paradigm, we found that the neural response to visual patterns was reduced in individuals who consumed a yeast extract product rich in substances associated with the production of GABA (glutamate and B vitamins), but not in a control group who consumed a placebo substance (n = 14 per group). This demonstrates that the balance of excitation and inhibition in the brain can be influenced by dietary interventions, suggesting possible clinical benefits in conditions (e.g. epilepsy) where inhibition is abnormal.},
note = {Publisher: SAGE Publications Ltd STM},
keywords = {VIEWPixx},
pubstate = {published},
tppubtype = {article}
}