Open Access Publications
From research on the visual systems of turtles, to the perception of faces with or without makeup, to transaccadic perception and perceptual cycles in the brain, VPixx hardware and software solutions have supported research in vision science and beyond for over 20 years. We are immensely proud of the discoveries and accomplishments of our customers across the world.
On this page you will find a non-exhaustive list of peer-reviewed, open access publications citing VPixx tools, dating back to 2003. Browse the list or use the tag filter to search for specific products. Note that we report the device used in each paper as described by the authors; this may not accurately reflect the specific model used (e.g., VIEWPixx vs. VIEWPixx /3D), nor do we guarantee the accuracy of published content. Please contact our team at [email protected] if you have any questions about a specific paper.
Curious about a specific application of our tools? Can’t find what you are looking for? Our staff scientists are happy to discuss paradigms and protocols using our equipment by email or video chat. Please contact us with your questions.
Want to have your work added to our library? Send us a message at [email protected] and we will add it. Your article must be peer-reviewed, open access, and must indicate that VPixx products were used in the research.
Use the search tool below to search for specific terms among the titles, authors and abstracts in our library.
Huber-Huber, Christoph; Melcher, David
Saccade execution increases the preview effect with faces: An EEG and eye-tracking coregistration study Journal Article
In: Attention, Perception, & Psychophysics, 2023, ISSN: 1943-393X.
Abstract | Links | BibTeX | Tags: VIEWPixxEEG
@article{huber-huber_saccade_2023,
title = {Saccade execution increases the preview effect with faces: An EEG and eye-tracking coregistration study},
author = {Christoph Huber-Huber and David Melcher},
url = {https://doi.org/10.3758/s13414-023-02802-5},
doi = {10.3758/s13414-023-02802-5},
issn = {1943-393X},
year = {2023},
date = {2023-11-01},
urldate = {2023-12-21},
journal = {Attention, Perception, & Psychophysics},
abstract = {Under naturalistic viewing conditions, humans conduct about three to four saccadic eye movements per second. These dynamics imply that in real life, humans rarely see something completely new; there is usually a preview of the upcoming foveal input from extrafoveal regions of the visual field. In line with results from the field of reading research, we have shown with EEG and eye-tracking coregistration that an extrafoveal preview also affects postsaccadic visual object processing and facilitates discrimination. Here, we ask whether this preview effect in the fixation-locked N170, and in manual responses to the postsaccadic target face (tilt discrimination), requires saccade execution. Participants performed a gaze-contingent experiment in which extrafoveal face images could change their orientation during a saccade directed to them. In a control block, participants maintained stable gaze throughout the experiment and the extrafoveal face reappeared foveally after a simulated saccade latency. Compared with this no-saccade condition, the neural and the behavioral preview effects were much larger in the saccade condition. We also found shorter first fixation durations after an invalid preview, which is in contrast to reading studies. We interpret the increased preview effect under saccade execution as the result of the additional sensorimotor processes that come with gaze behavior compared with visual perception under stable fixation. In addition, our findings call into question whether EEG studies with fixed gaze capture key properties and dynamics of active, natural vision.},
keywords = {VIEWPixxEEG},
pubstate = {published},
tppubtype = {article}
}
Kim, June Hee; Yin, Christine; Merriam, Elisha P.; Roth, Zvi N.
Pupil Size Is Sensitive to Low-Level Stimulus Features, Independent of Arousal-Related Modulation Journal Article
In: eNeuro, vol. 10, no. 10, 2023, ISSN: 2373-2822, (Publisher: Society for Neuroscience Section: Research Article: New Research).
Abstract | Links | BibTeX | Tags: VIEWPixx3D
@article{kim_pupil_2023,
title = {Pupil Size Is Sensitive to Low-Level Stimulus Features, Independent of Arousal-Related Modulation},
author = {June Hee Kim and Christine Yin and Elisha P. Merriam and Zvi N. Roth},
url = {https://www.eneuro.org/content/10/10/ENEURO.0005-23.2023},
doi = {10.1523/ENEURO.0005-23.2023},
issn = {2373-2822},
year = {2023},
date = {2023-10-01},
urldate = {2024-01-17},
journal = {eNeuro},
volume = {10},
number = {10},
abstract = {
Similar to a camera aperture, pupil size adjusts to the surrounding luminance. Unlike a camera, pupil size is additionally modulated both by stimulus properties and by cognitive processes, including attention and arousal, though the interdependence of these factors is unclear. We hypothesized that different stimulus properties interact to jointly modulate pupil size while remaining independent from the impact of arousal. We measured pupil responses from human observers to equiluminant stimuli during a demanding rapid serial visual presentation (RSVP) task at fixation and tested how response amplitude depends on contrast, spatial frequency, and reward level. We found that under constant luminance, unattended stimuli evoke responses that are separable from changes caused by general arousal or attention. We further uncovered a double-dissociation between task-related responses and stimulus-evoked responses, suggesting that different sources of pupil size modulation are independent of one another. Our results shed light on neural pathways underlying pupillary response.},
note = {Publisher: Society for Neuroscience
Section: Research Article: New Research},
keywords = {VIEWPixx3D},
pubstate = {published},
tppubtype = {article}
}
Kryklywy, James H.; Forys, Brandon J.; Vieira, Joana B.; Quinlan, Derek J.; Mitchell, Derek G. V.
Dissociating representations of affect and motion in visual cortices Journal Article
In: Cognitive, Affective, & Behavioral Neuroscience, vol. 23, no. 5, pp. 1322–1345, 2023, ISSN: 1531-135X.
Abstract | Links | BibTeX | Tags: VPixxProgram
@article{kryklywy_dissociating_2023,
title = {Dissociating representations of affect and motion in visual cortices},
author = {James H. Kryklywy and Brandon J. Forys and Joana B. Vieira and Derek J. Quinlan and Derek G. V. Mitchell},
url = {https://doi.org/10.3758/s13415-023-01115-2},
doi = {10.3758/s13415-023-01115-2},
issn = {1531-135X},
year = {2023},
date = {2023-10-01},
urldate = {2024-01-02},
journal = {Cognitive, Affective, & Behavioral Neuroscience},
volume = {23},
number = {5},
pages = {1322–1345},
abstract = {While a delicious dessert being presented to us may elicit strong feelings of happiness and excitement, the same treat falling slowly away can lead to sadness and disappointment. Our emotional response to the item depends on its visual motion direction. Despite this importance, it remains unclear whether (and how) cortical areas devoted to decoding motion direction represents or integrates emotion with perceived motion direction. Motion-selective visual area V5/MT+ sits, both functionally and anatomically, at the nexus of dorsal and ventral visual streams. These pathways, however, differ in how they are modulated by emotional cues. The current study was designed to disentangle how emotion and motion perception interact, as well as use emotion-dependent modulation of visual cortices to understand the relation of V5/MT+ to canonical processing streams. During functional magnetic resonance imaging (fMRI), approaching, receding, or static motion after-effects (MAEs) were induced on stationary positive, negative, and neutral stimuli. An independent localizer scan was conducted to identify the visual-motion area V5/MT+. Through univariate and multivariate analyses, we demonstrated that emotion representations in V5/MT+ share a more similar response profile to that observed in ventral visual than dorsal, visual structures. Specifically, V5/MT+ and ventral structures were sensitive to the emotional content of visual stimuli, whereas dorsal visual structures were not. Overall, this work highlights the critical role of V5/MT+ in the representation and processing of visually acquired emotional content. It further suggests a role for this region in utilizing affectively salient visual information to augment motion perception of biologically relevant stimuli.},
keywords = {VPixxProgram},
pubstate = {published},
tppubtype = {article}
}
Hussain, Sana; Menchaca, Isaac; Shalchy, Mahsa Alizadeh; Yaghoubi, Kimia; Langley, Jason; Seitz, Aaron R.; Hu, Xiaoping P.; Peters, Megan A. K.
Locus coeruleus integrity predicts ease of attaining and maintaining neural states of high attentiveness Journal Article
In: Brain Research Bulletin, vol. 202, pp. 110733, 2023, ISSN: 0361-9230.
Abstract | Links | BibTeX | Tags: TRACKPixx3
@article{hussain_locus_2023,
title = {Locus coeruleus integrity predicts ease of attaining and maintaining neural states of high attentiveness},
author = {Sana Hussain and Isaac Menchaca and Mahsa Alizadeh Shalchy and Kimia Yaghoubi and Jason Langley and Aaron R. Seitz and Xiaoping P. Hu and Megan A. K. Peters},
url = {https://www.sciencedirect.com/science/article/pii/S0361923023001582},
doi = {10.1016/j.brainresbull.2023.110733},
issn = {0361-9230},
year = {2023},
date = {2023-10-01},
urldate = {2023-12-21},
journal = {Brain Research Bulletin},
volume = {202},
pages = {110733},
abstract = {The locus coeruleus (LC), a small subcortical structure in the brainstem, is the brain’s principal source of norepinephrine. It plays a primary role in regulating stress, the sleep-wake cycle, and attention, and its degradation is associated with aging and neurodegenerative diseases associated with cognitive deficits (e.g., Parkinson’s, Alzheimer’s). Yet precisely how norepinephrine drives brain networks to support healthy cognitive function remains poorly understood – partly because LC’s small size makes it difficult to study noninvasively in humans. Here, we characterized LC’s influence on brain dynamics using a hidden Markov model fitted to functional neuroimaging data from healthy young adults across four attention-related brain networks and LC. We modulated LC activity using a behavioral paradigm and measured individual differences in LC magnetization transfer contrast. The model revealed five hidden states, including a stable state dominated by salience-network activity that occurred when subjects actively engaged with the task. LC magnetization transfer contrast correlated with this state’s stability across experimental manipulations and with subjects’ propensity to enter into and remain in this state. These results provide new insight into LC’s role in driving spatiotemporal neural patterns associated with attention, and demonstrate that variation in LC integrity can explain individual differences in these patterns even in healthy young adults.},
keywords = {TRACKPixx3},
pubstate = {published},
tppubtype = {article}
}
Taveras-Cruz, Yesenia; Eskew Jr., Rhea T.
Threshold versus intensity curves measured with a new high-brightness display system Journal Article
In: Journal of Vision, vol. 23, no. 11, pp. 71, 2023, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: PROPixx
@article{taveras-cruz_threshold_2023,
title = {Threshold versus intensity curves measured with a new high-brightness display system},
author = {Yesenia Taveras-Cruz and Rhea T. Eskew Jr.},
url = {https://doi.org/10.1167/jov.23.11.71},
doi = {10.1167/jov.23.11.71},
issn = {1534-7362},
year = {2023},
date = {2023-09-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {23},
number = {11},
pages = {71},
abstract = {Classical threshold vs. intensity (tvi) curves were measured using optical systems and were generally limited to increment test stimuli and relatively simple spatial patterns. Modern displays provide more flexibility in terms of stimuli spatial profiles but are usually dim enough that there may be rod intrusion when measuring cone responses. Here we describe a high-brightness display system and present tvi’s for increment and decrement achromatic tests. The system consists of a PROPixx three-chip DLP LED color projector (VPixx Technologies, Saint-Bruno, Canada) controlled via a Datapixx display driver, with 12-bit digital to analog conversion per RGB channel. Light from the projector is collected in a large diameter lens and focused on high gain rear projection screen. Retinal illuminance of the background may be varied in three ways: (a) varying the mean current supplied to the LEDs from the controller (adjustable in software); (b) using calibrated neutral density filters mounted near the eye; and (c) changing the midpoint of the RGB channels in software (e.g., making the white background as R=G=B=0.1 instead of 0.5). Method (c) is made easier by the fact that the PROPixx “gamma curve” is linear, which also means that no RGB bits are lost to gamma correction. We will show thresholds for achromatic tests on a white background varying from 0.56 to 4.03 log trolands, with preliminary results suggesting differences in the tvi curves between the increment and decrement tests.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Marijan, Aleksandra; Mestre, Clara; Candy, T Rowan; Bonnen, Kathryn
The Role of Prediction During Continuous Visual Tracking in 3D Environments Journal Article
In: Journal of Vision, vol. 23, no. 9, pp. 5601, 2023, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: 3DPolarizer, PROPixx
@article{marijan_role_2023,
title = {The Role of Prediction During Continuous Visual Tracking in 3D Environments},
author = {Aleksandra Marijan and Clara Mestre and T Rowan Candy and Kathryn Bonnen},
url = {https://doi.org/10.1167/jov.23.9.5601},
doi = {10.1167/jov.23.9.5601},
issn = {1534-7362},
year = {2023},
date = {2023-08-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {23},
number = {9},
pages = {5601},
abstract = {In everyday life, prediction plays a critical role in ocular motor target tracking. The ocular motor system employs a mixture of saccades and smooth pursuit across version and vergence eye movements to successfully follow objects as they move in the world. However, these eye movements are most often studied in isolation, saccades separate from smooth pursuit, version separate from vergence. Here we examined the perception/prediction of motion trajectories and how different types of eye movements are employed to coordinate the ocular motor tracking of those targets. Eye movements were recorded with an Eyelink 1000 (SR Research) at 500 Hz. The stimuli were presented using a PROPixx projector (VPixx Technologies) and an active circular polarizer, with subjects wearing passive circular polarizing glasses. The screen was set at a viewing distance of 70 cm. Cartoon images of angular size 2.2° moved with horizontal trajectories (initiating version eye movements) or motion-in-depth trajectories (initiating convergence and divergence). The motion trajectories in the predictable condition were sinusoids of varying amplitudes (5, 10 and 20 cm) and temporal frequencies (.25, .5, and 1 Hz). The unpredictable trajectories were smoothed Brownian random walks in position (sigma = 0.1 cm, 0.2 cm, and 0.3 cm). We measured the number of saccades across all conditions. In the horizontal motion condition, there were systematic increases in the number of saccades with increasing trajectory amplitudes and temporal frequency. For the motion-in-depth condition, participants made similar numbers of saccades regardless of the trajectory amplitude and frequency. There were no consistent differences between saccade behavior in the predictable vs. unpredictable conditions. A cross-correlogram analysis of the unpredictable trajectory condition revealed a longer latency (µ=77ms},
keywords = {3DPolarizer, PROPixx},
pubstate = {published},
tppubtype = {article}
}
Charbonneau, Isabelle; Ledrou-Paquet, Vicki; Blais, Caroline; Fiset, Daniel
Does Observers’ Ethnicity Influence Visual Strategies for Gender and Expressiveness Judgments ? Journal Article
In: Journal of Vision, vol. 23, no. 9, pp. 4976, 2023, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: Pack&Go
@article{charbonneau_does_2023,
title = {Does Observers’ Ethnicity Influence Visual Strategies for Gender and Expressiveness Judgments ?},
author = {Isabelle Charbonneau and Vicki Ledrou-Paquet and Caroline Blais and Daniel Fiset},
url = {https://doi.org/10.1167/jov.23.9.4976},
doi = {10.1167/jov.23.9.4976},
issn = {1534-7362},
year = {2023},
date = {2023-08-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {23},
number = {9},
pages = {4976},
abstract = {Recent advances in cross-cultural studies emphasized the importance of including diversified groups of participants to better understand mechanisms underlying various face processing abilities, whether they reveal a difference or not (e.g. Blais et al., 2021). In visual psychophysics, little is known about visual strategies underlying face perception among Black observers. Therefore, we investigated visual strategies in Black and White participants in a Gender and Expressiveness (ExNex) tasks, using a newly validated platform, Pack & Go by VPIXX, which allows high quality psychophysic testing online. Sixty participants (15 Blacks and 45 Whites) completed both experiments (4000 trials per participant) conducted using the Bubble’s technique (Gosselin & Schyns, 2001) which samples visual information on a trial-by-trial basis using small gaussian windows in order to reveal the most useful information in any visual task. Accuracy was maintained at 75% by adjusting online the number of bubbles using QUEST (Watson & Pelli, 1983). Group performance levels were controlled by matching individual Black participants with White participants according to their final average number of bubbles in both tasks. Classification images were produced by calculating a weighted sum of the bubbles mask, using the trial-by-trial accuracy transformed into z-scores as weights. Pixel tests from the Stat4CI (Chauvin et al., 2005) toolbox revealed significant pixels associated with performance (p< .05; Zcrit = 4.05). Mainly, both groups made use of the same visual information for both race stimuli, that is, reliance on the eye in the Gender task and on the mouth in the ExNex task. Interestingly, Black participants also relied significantly more on the left eye for ExNex judgements, but only with Black stimuli. These differences and similarities in visual strategies for Black and White observers, will be discussed regarding cross-cultural differences in face perception in general.},
keywords = {Pack&Go},
pubstate = {published},
tppubtype = {article}
}
Taveras-Cruz, Yesenia; Sehgal, Aanya; Eskew Jr., Rhea T.
Increment and decrement threshold vs. intensity curves for achromatic and L-cone tests. Journal Article
In: Journal of Vision, vol. 23, no. 9, pp. 5729, 2023, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: PROPixx
@article{taveras-cruz_increment_2023,
title = {Increment and decrement threshold vs. intensity curves for achromatic and L-cone tests.},
author = {Yesenia Taveras-Cruz and Aanya Sehgal and Rhea T. Eskew Jr.},
url = {https://doi.org/10.1167/jov.23.9.5729},
doi = {10.1167/jov.23.9.5729},
issn = {1534-7362},
year = {2023},
date = {2023-08-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {23},
number = {9},
pages = {5729},
abstract = {The desensitization of the visual system as a function of the increasing luminance of a background field yields threshold vs. intensity (tvi) curves, classically measured using increment tests. Here we use a new, high-brightness display system to measure both increment and decrement thresholds. Our display system is based upon a PROPixx three-chip DLP LED color projector (VPixx Technologies, Saint-Bruno, Canada), with light from the projector collected into a field lens and focused onto a high gain rear projection screen. This display combines the brightness of traditional optical systems with the flexibility of control provided by modern displays; in particular, it is simple to use the silent substitution method to isolate single cone types. Here we report tvi curves for achromatic and (L-)ong wavelength sensitive cone isolating tests, measured using method of adjustment. Selected thresholds were verified with a spatial, two-alternative forced-choice procedure. The adapting background was white, with luminances ranging from 0.6 to 4.0 log Trolands (a maximum near 3200 cd/m2, bleaching about 1/3 of the L and M cone pigment). Our observers are slightly more sensitive to decrements than increments (about 0.1 log units), for both achromatic and L-cone tests, and to L-cone tests than to achromatic tests (about 0.6 log cone contrast units), over the entire adapting range. Both increment and decrement thresholds follow the Stiles template, approximating Weber’s law except at the lowest adapting levels. The achromatic tvi’s, for both increment and decrement tests, are, on average, slightly steeper than the L-cone tvi’s. In addition, decrement tvi’s are steeper than the increment tvi’s, indicating greater effects of light adaptation for the decrements, which may be due to differences in the effects of light adaptation in ON and OFF pathways.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Barrionuevo, Pablo A.; Schütz, Alexander C.; Gegenfurtner, Karl R.
The role of rod and cone signals in mesopic brightness induction Journal Article
In: Journal of Vision, vol. 23, no. 9, pp. 5330, 2023, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: PROPixxMultispectral
@article{barrionuevo_role_2023,
title = {The role of rod and cone signals in mesopic brightness induction},
author = {Pablo A. Barrionuevo and Alexander C. Schütz and Karl R. Gegenfurtner},
url = {https://doi.org/10.1167/jov.23.9.5330},
doi = {10.1167/jov.23.9.5330},
issn = {1534-7362},
year = {2023},
date = {2023-08-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {23},
number = {9},
pages = {5330},
abstract = {The perception of brightness is usually assessed under photopic viewing conditions, where vision is mediated by cone photoreceptors. However, under mesopic viewing conditions, rods have a profound effect on brightness perception. Here, we investigated how isolated cone and rod signals contribute to two spatial brightness induction phenomena: assimilation and contrast. We used an ad-hoc tetrachromatic projector (VPixx Technologies Inc.) and the silent substitution method to display isolated rod (Cmax = 72%) and cone (Cmax = 52%) stimuli. The stimuli consisted of White’s Illusion (assimilation) and Simultaneous Brightness Contrast displays, with embedded matching and reference patches. The brighter patch had to be chosen by participants in a constant stimuli paradigm. The normalized difference between the patches’ excitations at the point of subjective equality defined the induction effect. The stimuli were presented monocularly at 6.5° from the fovea. In a second experiment with a trichromatic monitor (no photoreceptor isolation), we analyzed the effect of light adaptation from scotopic to photopic conditions. For the rod condition, the induction effect was significantly higher for assimilation than for contrast. However, the assimilation and contrast effects were similar for the cone condition. These results mimicked the increase of assimilation with decreasing light levels from photopic to scotopic viewing conditions. In a subsidiary experiment, we tested the low-light-level-poor-spatial-resolution hypothesis: smoothed stimuli would maintain the same induction effect across light levels. However, we found similar trends with and without smoothing, rejecting this alternative hypothesis. Rod vision enhanced brightness assimilation, but it did not affect brightness contrast when compared to cone vision. These results indicate that brightness contrast and assimilation have different origins and highlight the important role of low-level mechanisms in brightness perception.},
keywords = {PROPixxMultispectral},
pubstate = {published},
tppubtype = {article}
}
Gingras, Francis; Duncan, Justin; Gosselin, Frédéric; Fiset, Daniel; Blais, Caroline
Using Online Testing to Measure Spatial Frequency and Orientation Tuning in Face Processing Journal Article
In: Journal of Vision, vol. 23, no. 9, pp. 5028, 2023, ISSN: 1534-7362.
Links | BibTeX | Tags: Pack&Go
@article{gingras_using_2023,
title = {Using Online Testing to Measure Spatial Frequency and Orientation Tuning in Face Processing},
author = {Francis Gingras and Justin Duncan and Frédéric Gosselin and Daniel Fiset and Caroline Blais},
url = {https://jov.arvojournals.org/article.aspx?articleid=2791488},
doi = {10.1167/jov.23.9.5028},
issn = {1534-7362},
year = {2023},
date = {2023-08-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {23},
number = {9},
pages = {5028},
keywords = {Pack&Go},
pubstate = {published},
tppubtype = {article}
}
Saumure, Camille; Plouffe-Demers, Marie-Pier; Fiset, Daniel; Cormier, Stéphanie; Zhang, Ye; Sun, Dan; Feng, Manni; Luo, Feifan; Kunz, Miriam; Blais, Caroline
Differences Between East Asians and Westerners in the Mental Representations and Visual Information Extraction Involved in the Decoding of Pain Facial Expression Intensity Journal Article
In: Affective Science, vol. 4, no. 2, pp. 332–349, 2023, ISSN: 2662-205X.
Abstract | Links | BibTeX | Tags: Pack&Go
@article{saumure_differences_2023,
title = {Differences Between East Asians and Westerners in the Mental Representations and Visual Information Extraction Involved in the Decoding of Pain Facial Expression Intensity},
author = {Camille Saumure and Marie-Pier Plouffe-Demers and Daniel Fiset and Stéphanie Cormier and Ye Zhang and Dan Sun and Manni Feng and Feifan Luo and Miriam Kunz and Caroline Blais},
url = {https://doi.org/10.1007/s42761-023-00186-1},
doi = {10.1007/s42761-023-00186-1},
issn = {2662-205X},
year = {2023},
date = {2023-06-01},
urldate = {2024-01-26},
journal = {Affective Science},
volume = {4},
number = {2},
pages = {332–349},
abstract = {Effectively communicating pain is crucial for human beings. Facial expressions are one of the most specific forms of behavior associated with pain, but the way culture shapes expectations about the intensity with which pain is typically facially conveyed, and the visual strategies deployed to decode pain intensity in facial expressions, is poorly understood. The present study used a data-driven approach to compare two cultures, namely East Asians and Westerners, with respect to their mental representations of pain facial expressions (experiment 1},
keywords = {Pack&Go},
pubstate = {published},
tppubtype = {article}
}
Ferrante, Oscar; Zhigalov, Alexander; Hickey, Clayton; Jensen, Ole
Statistical Learning of Distractor Suppression Downregulates Prestimulus Neural Excitability in Early Visual Cortex Journal Article
In: Journal of Neuroscience, vol. 43, no. 12, pp. 2190–2198, 2023, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Research Articles).
Abstract | Links | BibTeX | Tags: PROPixx
@article{ferrante_statistical_2023,
title = {Statistical Learning of Distractor Suppression Downregulates Prestimulus Neural Excitability in Early Visual Cortex},
author = {Oscar Ferrante and Alexander Zhigalov and Clayton Hickey and Ole Jensen},
url = {https://www.jneurosci.org/content/43/12/2190},
doi = {10.1523/JNEUROSCI.1703-22.2022},
issn = {0270-6474, 1529-2401},
year = {2023},
date = {2023-03-01},
urldate = {2024-01-31},
journal = {Journal of Neuroscience},
volume = {43},
number = {12},
pages = {2190–2198},
abstract = {Visual attention is highly influenced by past experiences. Recent behavioral research has shown that expectations about the spatial location of distractors within a search array are implicitly learned, with expected distractors becoming less interfering. Little is known about the neural mechanism supporting this form of statistical learning. Here, we used magnetoencephalography (MEG) to measure human brain activity to test whether proactive mechanisms are involved in the statistical learning of distractor locations. Specifically, we used a new technique called rapid invisible frequency tagging (RIFT) to assess neural excitability in early visual cortex during statistical learning of distractor suppression while concurrently investigating the modulation of posterior alpha band activity (8–12 Hz). Male and female human participants performed a visual search task in which a target was occasionally presented alongside a color-singleton distractor. Unbeknown to the participants, the distracting stimuli were presented with different probabilities across the two hemifields. RIFT analysis showed that early visual cortex exhibited reduced neural excitability in the prestimulus interval at retinotopic locations associated with higher distractor probabilities. In contrast, we did not find any evidence of expectation-driven distractor suppression in alpha band activity. These findings indicate that proactive mechanisms of attention are involved in predictive distractor suppression and that these mechanisms are associated with altered neural excitability in early visual cortex. Moreover, our findings indicate that RIFT and alpha band activity might subtend different and possibly independent attentional mechanisms.
SIGNIFICANCE STATEMENT What we experienced in the past affects how we perceive the external world in the future. For example, an annoying flashing light might be better ignored if we know in advance where it usually appears. This ability of extracting regularities from the environment is called statistical learning. In this study, we explore the neuronal mechanisms allowing the attentional system to overlook items that are unequivocally distracting based on their spatial distribution. By recording brain activity using MEG while probing neural excitability with a novel technique called RIFT, we show that the neuronal excitability in early visual cortex is reduced in advance of stimulus presentation for locations where distracting items are more likely to occur.},
note = {Publisher: Society for Neuroscience
Section: Research Articles},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Seijdel, Noor; Marshall, Tom R; Drijvers, Linda
Rapid invisible frequency tagging (RIFT): a promising technique to study neural and cognitive processing using naturalistic paradigms Journal Article
In: Cerebral Cortex, vol. 33, no. 5, pp. 1626–1629, 2023, ISSN: 1047-3211.
Abstract | Links | BibTeX | Tags: PROPixx
@article{seijdel_rapid_2023,
title = {Rapid invisible frequency tagging (RIFT): a promising technique to study neural and cognitive processing using naturalistic paradigms},
author = {Noor Seijdel and Tom R Marshall and Linda Drijvers},
url = {https://doi.org/10.1093/cercor/bhac160},
doi = {10.1093/cercor/bhac160},
issn = {1047-3211},
year = {2023},
date = {2023-03-01},
urldate = {2024-01-31},
journal = {Cerebral Cortex},
volume = {33},
number = {5},
pages = {1626–1629},
abstract = {Frequency tagging has been successfully used to investigate selective stimulus processing in electroencephalography (EEG) or magnetoencephalography (MEG) studies. Recently, new projectors have been developed that allow for frequency tagging at higher frequencies (>60 Hz). This technique, rapid invisible frequency tagging (RIFT), provides two crucial advantages over low-frequency tagging as (i) it leaves low-frequency oscillations unperturbed, and thus open for investigation, and ii) it can render the tagging invisible, resulting in more naturalistic paradigms and a lack of participant awareness. The development of this technique has far-reaching implications as oscillations involved in cognitive processes can be investigated, and potentially manipulated, in a more naturalistic manner.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Levi, Aaron J.; Zhao, Yuan; Park, Il Memming; Huk, Alexander C.
Sensory and Choice Responses in MT Distinct from Motion Encoding Journal Article
In: Journal of Neuroscience, vol. 43, no. 12, pp. 2090–2103, 2023, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Research Articles).
Abstract | Links | BibTeX | Tags: DATAPixx, PROPixx
@article{levi_sensory_2023,
title = {Sensory and Choice Responses in MT Distinct from Motion Encoding},
author = {Aaron J. Levi and Yuan Zhao and Il Memming Park and Alexander C. Huk},
url = {https://www.jneurosci.org/content/43/12/2090},
doi = {10.1523/JNEUROSCI.0267-22.2023},
issn = {0270-6474, 1529-2401},
year = {2023},
date = {2023-03-01},
urldate = {2023-12-21},
journal = {Journal of Neuroscience},
volume = {43},
number = {12},
pages = {2090–2103},
abstract = {The macaque middle temporal (MT) area is well known for its visual motion selectivity and relevance to motion perception, but the possibility of it also reflecting higher-level cognitive functions has largely been ignored. We tested for effects of task performance distinct from sensory encoding by manipulating subjects' temporal evidence-weighting strategy during a direction discrimination task while performing electrophysiological recordings from groups of MT neurons in rhesus macaques (one male, one female). This revealed multiple components of MT responses that were, surprisingly, not interpretable as behaviorally relevant modulations of motion encoding, or as bottom-up consequences of the readout of motion direction from MT. The time-varying motion-driven responses of MT were strongly affected by our strategic manipulation—but with time courses opposite the subjects' temporal weighting strategies. Furthermore, large choice-correlated signals were represented in population activity distinct from its motion responses, with multiple phases that lagged psychophysical readout and even continued after the stimulus (but which preceded motor responses). In summary, a novel experimental manipulation of strategy allowed us to control the time course of readout to challenge the correlation between sensory responses and choices, and population-level analyses of simultaneously recorded ensembles allowed us to identify strong signals that were so distinct from direction encoding that conventional, single-neuron-centric analyses could not have revealed or properly characterized them. Together, these approaches revealed multiple cognitive contributions to MT responses that are task related but not functionally relevant to encoding or decoding of motion for psychophysical direction discrimination, providing a new perspective on the assumed status of MT as a simple sensory area.
SIGNIFICANCE STATEMENT This study extends understanding of the middle temporal (MT) area beyond its representation of visual motion. Combining multineuron recordings, population-level analyses, and controlled manipulation of task strategy, we exposed signals that depended on changes in temporal weighting strategy, but did not manifest as feedforward effects on behavior. This was demonstrated by (1) an inverse relationship between temporal dynamics of behavioral readout and sensory encoding, (2) a choice-correlated signal that always lagged the stimulus time points most correlated with decisions, and (3) a distinct choice-correlated signal after the stimulus. These findings invite re-evaluation of MT for functions outside of its established sensory role and highlight the power of experimenter-controlled changes in temporal strategy, coupled with recording and analysis approaches that transcend the single-neuron perspective.},
note = {Publisher: Society for Neuroscience
Section: Research Articles},
keywords = {DATAPixx, PROPixx},
pubstate = {published},
tppubtype = {article}
}
Weise, Annekathrin; Hartmann, Thomas; Parmentier, Fabrice; Weisz, Nathan; Ruhnau, Philipp
Involuntary shifts of spatial attention contribute to distraction—Evidence from oscillatory alpha power and reaction time data Journal Article
In: Psychophysiology, vol. 60, no. 10, pp. e14353, 2023, ISSN: 1469-8986, (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/psyp.14353).
Abstract | Links | BibTeX | Tags: DATAPixx2, PROPixx, RESPONSEPixxMRI, SOUNDPixx
@article{weise_involuntary_2023,
title = {Involuntary shifts of spatial attention contribute to distraction—Evidence from oscillatory alpha power and reaction time data},
author = {Annekathrin Weise and Thomas Hartmann and Fabrice Parmentier and Nathan Weisz and Philipp Ruhnau},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/psyp.14353},
doi = {10.1111/psyp.14353},
issn = {1469-8986},
year = {2023},
date = {2023-01-01},
urldate = {2023-12-21},
journal = {Psychophysiology},
volume = {60},
number = {10},
pages = {e14353},
abstract = {Imagine you are focusing on the traffic on a busy street to ride your bike safely when suddenly you hear the siren of an ambulance. This unexpected sound involuntarily captures your attention and interferes with ongoing performance. We tested whether this type of distraction involves a spatial shift of attention. We measured behavioral data and magnetoencephalographic alpha power during a cross-modal paradigm that combined an exogenous cueing task and a distraction task. In each trial, a task-irrelevant sound preceded a visual target (left or right). The sound was usually the same animal sound (i.e., standard sound). Rarely, it was replaced by an unexpected environmental sound (i.e., deviant sound). Fifty percent of the deviants occurred on the same side as the target, and 50% occurred on the opposite side. Participants responded to the location of the target. As expected, responses were slower to targets that followed a deviant compared to a standard. Crucially, this distraction effect was mitigated by the spatial relationship between the targets and the deviants: responses were faster when targets followed deviants on the same versus different side, indexing a spatial shift of attention. This was further corroborated by a posterior alpha power modulation that was higher in the hemisphere ipsilateral (vs. contralateral) to the location of the attention-capturing deviant. We suggest that this alpha power lateralization reflects a spatial attention bias. Overall, our data support the contention that spatial shifts of attention contribute to deviant distraction.},
note = {_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/psyp.14353},
keywords = {DATAPixx2, PROPixx, RESPONSEPixxMRI, SOUNDPixx},
pubstate = {published},
tppubtype = {article}
}
Brickwedde, Marion; Bezsudnova, Yulia; Kowalczyk, Anna; Jensen, Ole; Zhigalov, Alexander
Application of rapid invisible frequency tagging for brain computer interfaces Journal Article
In: Journal of Neuroscience Methods, vol. 382, pp. 109726, 2022, ISSN: 0165-0270.
Abstract | Links | BibTeX | Tags: PROPixx
@article{brickwedde_application_2022,
title = {Application of rapid invisible frequency tagging for brain computer interfaces},
author = {Marion Brickwedde and Yulia Bezsudnova and Anna Kowalczyk and Ole Jensen and Alexander Zhigalov},
url = {https://www.sciencedirect.com/science/article/pii/S0165027022002527},
doi = {10.1016/j.jneumeth.2022.109726},
issn = {0165-0270},
year = {2022},
date = {2022-12-01},
urldate = {2024-01-31},
journal = {Journal of Neuroscience Methods},
volume = {382},
pages = {109726},
abstract = {Background
Brain-computer interfaces (BCI) based on steady-state visual evoked potentials (SSVEPs/SSVEFs) are among the most commonly used BCI systems. They require participants to covertly attend to visual objects flickering at specified frequencies. The attended location is decoded online by analysing the power of neuronal responses at the flicker frequency.
New method
We implemented a novel rapid invisible frequency-tagging technique, utilizing a state-of-the-art projector with refresh rates of up to 1440 Hz. We flickered the luminance of visual objects at 56 and 60 Hz, which was invisible to participants but produced strong neuronal responses measurable with magnetoencephalography (MEG). The direction of covert attention, decoded from frequency-tagging responses, was used to control an online BCI PONG game.
Results
Our results show that seven out of eight participants were able to play the pong game controlled by the frequency-tagging signal, with average accuracies exceeding 60 %. Importantly, participants were able to modulate the power of the frequency-tagging response within a 1-second interval, while only seven occipital sensors were required to reliably decode the neuronal response.
Comparison with existing methods
In contrast to existing SSVEP-based BCI systems, rapid frequency-tagging does not produce a visible flicker. This extends the time-period participants can use it without fatigue, by avoiding distracting visual input. Furthermore, higher frequencies increase the temporal resolution of decoding, resulting in higher communication rates.
Conclusion
Using rapid invisible frequency-tagging opens new avenues for fundamental research and practical applications. In combination with novel optically pumped magnetometers (OPMs), it could facilitate the development of high-speed and mobile next-generation BCI systems.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Szekely, Brian; Shankar, Bharath; MacNeilage, Paul
Comparison of human foveal contrast sensitivity during walking and standing Journal Article
In: Journal of Vision, vol. 22, no. 14, pp. 4329, 2022, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: PROPixx
@article{szekely_comparison_2022,
title = {Comparison of human foveal contrast sensitivity during walking and standing},
author = {Brian Szekely and Bharath Shankar and Paul MacNeilage},
url = {https://doi.org/10.1167/jov.22.14.4329},
doi = {10.1167/jov.22.14.4329},
issn = {1534-7362},
year = {2022},
date = {2022-12-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {22},
number = {14},
pages = {4329},
abstract = {Prior studies comparing human contrast sensitivity during walking and standing have reported poorer contrast sensitivity during walking (Benjamin et al 2017; Cao & Haendel 2019). Here we present preliminary evidence suggesting that human foveal contrast sensitivity may be improved during walking compared to standing despite increased retinal image motion of the target. Ten human subjects judged the orientation (+/-45 deg) of Gabor targets (~11 cpd, 4° diameter, 32 msec duration) presented in the center of a projection screen (ProPixx-Vpixx) against a grey background at a distance of ~1.75 m in an otherwise darkened room. The contrast of the Gabor targets was fixed at a level that was previously determined to yield ~79% correct responses for each subject during standing. The timing of target presentation was randomized relative to the previous response and there was no fixation point. In a counterbalanced design, the mean percent correct was significantly better (p=0.04) during walking on a treadmill at 1.3 m/s (83±7%) than during standing (75±9%). Eye, head, and heel movements were tracked (EyeLink 2, Optitrack) and used to estimate viewing distance, retinal image motion, and timing of stimulus presentation relative to the locomotor stride cycle. Average viewing distance was nearer (and spatial frequency lower) during walking than standing (1.79 vs 1.68 m; 11.2 vs 10.5 cpd), but mean retinal image motion was increased during walking (1.84±3.23°/s) compared to standing (0.23±1.33), with the greatest retinal image motion (and least percent correct) during the heel-strike phase. Pending experiments to better control for differences in viewing distance and pupil dilation between standing and walking, this result may indicate enhanced dynamic visual acuity during walking that allows overcoming visual perturbations introduced by active movement.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}
Fiset, Daniel; Blais, Caroline
An empirical comparison of online and in-lab data collection using a data-driven method on Pack&Go (VPixx Technologies) Journal Article
In: Journal of Vision, vol. 22, no. 14, pp. 4417, 2022, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: Pack&Go
@article{fiset_empirical_2022,
title = {An empirical comparison of online and in-lab data collection using a data-driven method on Pack&Go (VPixx Technologies)},
author = {Daniel Fiset and Caroline Blais},
url = {https://doi.org/10.1167/jov.22.14.4417},
doi = {10.1167/jov.22.14.4417},
issn = {1534-7362},
year = {2022},
date = {2022-12-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {22},
number = {14},
pages = {4417},
abstract = {In recent years, new challenges have emerged for vision scientists. Firstly, the growing awareness that most of our theories are based on samples that do not reflect diversity – namely, Western, Educated, Industrialized, Rich, and Democratic (WEIRD) samples – stresses the need to reach individuals as diverse as possible for future studies. Secondly, the COVID-19 has put great constraints on our capacity to bring participants to the lab. In reaction to these challenges, new technologies have been developed to allow researchers to collect data on the internet. These technologies, however, are often ill adapted to the experimental paradigms we have developed in the field. For instance, they are often not designed to allow modifications of stimuli as a function of participants’ responses. Moreover, they are not well adapted to the use of data-driven classification image methods. In the present study, we tested a new platform for online experiments: Pack&Go from VPixx Technologies. This platform runs Matlab/Psychtoolbox3 experiments online. We tested participants on a categorization task using Bubbles, a data-driven method allowing to reveal visual information utilization. In Phase 1, participants were tested in the lab, in three different conditions each comprising 1000 trials: 1) The experimental code was run locally; 2) The experiment was conducted on the same hardware, but the experimental code was run by Pack&Go; 3) The experiment was conducted on a different computer, and the experimental code was run by Pack&Go. In Phase 2, we tested participants that were recruited using a panel provider (Prolifics), and tested from their home using Pack&Go. In all conditions, the exact same experimental code was used, making it easy to compare the results across conditions. The pattern of findings was well replicated across conditions. The pros and cons of testing data-driven methods online will be discussed.},
keywords = {Pack&Go},
pubstate = {published},
tppubtype = {article}
}
Sharp, Poppy; Gutteling, Tjerk; Melcher, David; Hickey, Clayton
Spatial Attention Tunes Temporal Processing in Early Visual Cortex by Speeding and Slowing Alpha Oscillations Journal Article
In: Journal of Neuroscience, vol. 42, no. 41, pp. 7824–7832, 2022, ISSN: 0270-6474, 1529-2401, (Publisher: Society for Neuroscience Section: Research Articles).
Abstract | Links | BibTeX | Tags: PROPixx, RESPONSEPixxMRI
@article{sharp_spatial_2022,
title = {Spatial Attention Tunes Temporal Processing in Early Visual Cortex by Speeding and Slowing Alpha Oscillations},
author = {Poppy Sharp and Tjerk Gutteling and David Melcher and Clayton Hickey},
url = {https://www.jneurosci.org/content/42/41/7824},
doi = {10.1523/JNEUROSCI.0509-22.2022},
issn = {0270-6474, 1529-2401},
year = {2022},
date = {2022-10-01},
urldate = {2023-12-21},
journal = {Journal of Neuroscience},
volume = {42},
number = {41},
pages = {7824–7832},
abstract = {The perception of dynamic visual stimuli relies on two apparently conflicting perceptual mechanisms: rapid visual input must sometimes be integrated into unitary percepts but at other times must be segregated or parsed into separate objects or events. Though they have opposite effects on our perceptual experience, the deployment of spatial attention benefits both operations. Little is known about the neural mechanisms underlying this impact of spatial attention on temporal perception. Here, we record magnetoencephalography (MEG) in male and female humans to demonstrate that the deployment of spatial attention for the purpose of segregating or integrating visual stimuli impacts prestimulus oscillatory activity in retinotopic visual brain areas where the attended location is represented. Alpha band oscillations contralateral to an attended location are therefore faster than ipsilateral oscillations when stimuli appearing at this location will need to be segregated, but slower in expectation of the need for integration, consistent with the idea that α frequency is linked to perceptual sampling rate. These results demonstrate a novel interaction between temporal visual processing and the allocation of attention in space.
SIGNIFICANCE STATEMENT Our environment is dynamic and visual input therefore varies over time. To make sense of continuously changing information, our visual system balances two complementary processes: temporal segregation in order to identify changes, and temporal integration to identify consistencies in time. When we know that a circumstance requires use of one or the other of these operations, we are able to prepare for this, and this preparation can be tracked in oscillatory brain activity. Here, we show how this preparation for temporal processing can be focused spatially. When we expect to integrate or segregate visual stimuli that will appear at a specific location, oscillatory brain activity changes in visual areas responsible for the representation of that location. In this way, spatial and temporal mechanisms interact to support adaptive, efficient perception.},
note = {Publisher: Society for Neuroscience
Section: Research Articles},
keywords = {PROPixx, RESPONSEPixxMRI},
pubstate = {published},
tppubtype = {article}
}
Goddard, Erin; Shooner, Christopher; Mullen, Kathy T.
Magnetoencephalography contrast adaptation reflects perceptual adaptation Journal Article
In: Journal of Vision, vol. 22, no. 10, pp. 16, 2022, ISSN: 1534-7362.
Abstract | Links | BibTeX | Tags: PROPixx
@article{goddard_magnetoencephalography_2022,
title = {Magnetoencephalography contrast adaptation reflects perceptual adaptation},
author = {Erin Goddard and Christopher Shooner and Kathy T. Mullen},
url = {https://doi.org/10.1167/jov.22.10.16},
doi = {10.1167/jov.22.10.16},
issn = {1534-7362},
year = {2022},
date = {2022-09-01},
urldate = {2023-12-21},
journal = {Journal of Vision},
volume = {22},
number = {10},
pages = {16},
abstract = {Contrast adaptation is a fundamental visual process that has been extensively investigated and used to infer the selectivity of visual cortex. We recently reported an apparent disconnect between the effects of contrast adaptation on perception and functional magnetic resonance imaging BOLD response adaptation, in which adaptation between chromatic and achromatic stimuli measured psychophysically showed greater selectivity than adaptation measured using BOLD signals. Here we used magnetoencephalography (MEG) recordings of neural responses to the same chromatic and achromatic adaptation conditions to characterize the neural effects of contrast adaptation and to determine whether BOLD adaptation or MEG better reflect the measured perceptual effects. Participants viewed achromatic, L-M isolating, or S-cone isolating radial sinusoids before adaptation and after adaptation to each of the three contrast directions. We measured adaptation-related changes in the neural response to a range of stimulus contrast amplitudes using two measures of the MEG response: the overall response amplitude, and a novel time-resolved measure of the contrast response function, derived from a classification analysis combined with multidimensional scaling. Within-stimulus adaptation effects on the contrast response functions in each case showed a pattern of contrast-gain or a combination of contrast-gain and response-gain effects. Cross-stimulus adaptation conditions showed that adaptation effects were highly stimulus selective across early, ventral, and dorsal visual cortical areas, consistent with the perceptual effects.},
keywords = {PROPixx},
pubstate = {published},
tppubtype = {article}
}