eScholarship
Open Access Publications from the University of California

This series is automatically populated with publications deposited by UC Merced Department of Cognitive Science researchers in accordance with the University of California’s open access policies. For more information see Open Access Policy Deposits and the UC Publication Management System.


Hearing Parents’ Use of Auditory, Visual, and Tactile Cues as a Function of Child Hearing Status

(2018)

Parent-child dyads in which the child is deaf but the parent is hearing present a unique opportunity to examine parents’ use of non-auditory cues, particularly vision and touch, to establish communicative intent. This study examines the multimodal communication patterns of hearing parents during a free play task with their hearing (N=9) or deaf (N=9) children. Specifically, we coded parents’ use of multimodal cues in the service of establishing joint attention with their children. Dyad types were compared for overall use of multimodal – auditory, visual, and tactile – attention-establishing cues, and for the overall number of successful and failed bids by a parent for a child’s attention. Each multimodal behavior on the part of the parent was tracked for whether it resulted in successful or failed initiation of joint attention. We focus our interpretation of the results on how hearing parents differentially accommodate their hearing and deaf children to engage them in joint attention. Findings can inform the development of recommendations for hearing parents of deaf children who are candidates for cochlear implantation regarding communication strategies to use prior to a child’s implantation. Moreover, these findings expand our understanding of how joint attention is established between parents and their preverbal children, regardless of children’s hearing status.


Influence of cognitive demand and auditory noise on postural dynamics

(2025)

The control of human balance involves an interaction between the human motor, cognitive, and sensory systems. The dynamics of this interaction are yet to be fully understood; however, work has shown that performing cognitive tasks hampers motor performance, while additive sensory noise benefits it. The current study aims to examine whether postural control is impacted by a concurrent working memory task, and similarly, whether additive noise can counteract the expected negative influence of the added cognitive demand. Postural sway of healthy young adults was collected during the performance of a modified N-back task with varying difficulty, in the presence and absence of auditory noise. Our results show a reduction in postural stability scaled to the difficulty of the cognitive task, but this effect is less prominent in the presence of additive noise. Additionally, by separating postural sway into different frequency bands, typically used to assess the exploratory vs. feedback-driven stabilizing dynamics of sway, we found a differential effect between the cognitive task and additive noise, thus demonstrating that both frequency regimes of postural sway are sensitive to high cognitive load and increased sensory information.
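The frequency-band separation described above can be sketched as a simple filtering step. This is an illustrative decomposition only: the 0.3 Hz cutoff, sampling rate, and synthetic signal below are assumptions for demonstration, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_sway_bands(sway, fs, cutoff=0.3):
    """Split a center-of-pressure trace into a slow (exploratory-like) and a
    fast (feedback-like) component. Cutoff is illustrative, not the paper's."""
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    low = filtfilt(b, a, sway)   # zero-phase low-pass: slow drift of sway
    high = sway - low            # residual: faster corrective fluctuations
    return low, high

fs = 100  # Hz, hypothetical sampling rate
t = np.arange(0, 30, 1 / fs)
# Synthetic sway: a slow 0.1 Hz drift plus a small fast 2 Hz oscillation
sway = np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.sin(2 * np.pi * 2.0 * t)
low, high = split_sway_bands(sway, fs)
```

Band-specific measures (e.g., variability within each component) could then be compared across cognitive-load and noise conditions.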

Noisy-channel language comprehension in aphasia: A Bayesian mixture modeling approach

(2025)

Individuals with "agrammatic" receptive aphasia have long been known to rely on semantic plausibility rather than syntactic cues when interpreting sentences. In contrast to early interpretations of this pattern as indicative of a deficit in syntactic knowledge, a recent proposal views agrammatic comprehension as a case of "noisy-channel" language processing with an increased expectation of noise in the input relative to healthy adults. Here, we investigate the nature of the noise model in aphasia and whether it is adapted to the statistics of the environment. We first replicate findings that a) healthy adults (N = 40) make inferences about the intended meaning of a sentence by weighing the prior probability of an intended sentence against the likelihood of a noise corruption and b) their estimate of the probability of noise increases when there are more errors in the input (manipulated via exposure sentences). We then extend prior findings that adults with chronic post-stroke aphasia (N = 28) and healthy age-matched adults (N = 19) similarly engage in noisy-channel inference during comprehension. We use a hierarchical latent mixture modeling approach to account for the fact that rates of guessing are likely to differ between healthy controls and individuals with aphasia and capture individual differences in the tendency to make inferences. We show that individuals with aphasia are more likely than healthy controls to draw noisy-channel inferences when interpreting semantically implausible sentences, even when group differences in the tendency to guess are accounted for. While healthy adults rapidly adapt their inference rates to an increase in noise in their input, whether individuals with aphasia do the same remains equivocal. Further investigation of comprehension through a noisy-channel lens holds promise for a parsimonious understanding of language processing in aphasia and may suggest potential avenues for treatment.
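The core noisy-channel computation can be illustrated with a toy Bayesian calculation: the comprehender weighs the prior probability that a plausible sentence was intended (and corrupted by noise) against the prior that the implausible literal sentence was intended (and transmitted faithfully). The numbers and the one-parameter noise likelihood below are simplifying assumptions, not the paper's hierarchical model.

```python
def noisy_channel_posterior(prior_plausible, prior_literal, p_noise):
    """Posterior probability that an implausible literal sentence was a
    noise-corrupted version of a plausible alternative (toy Bayes rule).
    p_noise stands in for the comprehender's estimated corruption rate."""
    lik_corrupted = p_noise        # plausible intent required one noise edit
    lik_faithful = 1 - p_noise     # literal intent required no corruption
    num = prior_plausible * lik_corrupted
    den = num + prior_literal * lik_faithful
    return num / den
```

Raising `p_noise` raises the posterior on the plausible interpretation, which mirrors the finding that a noisier input environment (and, per the paper, aphasia) increases the rate of noisy-channel inferences.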


Shifting the Level of Selection in Science.

(2024)

Criteria for recognizing and rewarding scientists primarily focus on individual contributions. This creates a conflict between what is best for scientists’ careers and what is best for science. In this article, we show how the theory of multilevel selection provides conceptual tools for modifying incentives to better align individual and collective interests. A core principle is the need to account for indirect effects by shifting the level at which selection operates from individuals to the groups in which individuals are embedded. This principle is used in several fields to improve collective outcomes, including animal husbandry, team sports, and professional organizations. Shifting the level of selection has the potential to ameliorate several problems in contemporary science, including accounting for scientists’ diverse contributions to knowledge generation, reducing individual-level competition, and promoting specialization and team science. We discuss the difficulties associated with shifting the level of selection and outline directions for future development in this domain.

The language network ages well: Preserved selectivity, lateralization, and within-network functional synchronization in older brains

(2024)

Healthy aging is associated with structural and functional brain changes. However, cognitive abilities differ from one another in how they change with age: whereas executive functions, like working memory, show age-related decline, aspects of linguistic processing remain relatively preserved (Hartshorne et al., 2015). This heterogeneity of the cognitive-behavioral landscape in aging predicts differences among brain networks in whether and how they should change with age. To evaluate this prediction, we used individual-subject fMRI analyses ('precision fMRI') to examine the language-selective network (Fedorenko et al., 2024) and the Multiple Demand (MD) network, which supports executive functions (Duncan et al., 2020), in older adults (n=77) relative to young controls (n=470). In line with past claims, relative to young adults, the MD network of older adults shows weaker and less spatially extensive activations during an executive function task and reduced within-network functional synchronization. However, in stark contrast to the MD network, we find remarkable preservation of the language network in older adults. Their language network responds to language as strongly and selectively as in younger adults, and is similarly lateralized and internally synchronized. In other words, the language network of older adults looks indistinguishable from that of younger adults. Our findings align with behavioral preservation of language skills in aging and suggest that some networks remain young-like, at least on standard measures of function and connectivity.


Investigating the role of auditory cues in modulating motor timing: insights from EEG and deep learning

(2024)

Research on action-based timing has shed light on the temporal dynamics of sensorimotor coordination. This study investigates the neural mechanisms underlying action-based timing, particularly during finger-tapping tasks involving synchronized and syncopated patterns. Twelve healthy participants completed a continuation task, alternating between tapping in time with an auditory metronome (pacing) and continuing without it (continuation). Electroencephalography data were collected to explore how neural activity changes across these coordination modes and phases. We applied deep learning methods to classify single-trial electroencephalography data and predict behavioral timing conditions. Results showed significant classification accuracy for distinguishing between pacing and continuation phases, particularly during the presence of auditory cues, emphasizing the role of auditory input in motor timing. However, when auditory components were removed from the electroencephalography data, the differentiation between phases became inconclusive. Mean accuracy asynchrony, a measure of timing error, emerged as a superior predictor of performance variability compared to inter-response interval. These findings highlight the importance of auditory cues in modulating motor timing behaviors and present the challenges of isolating motor activation in the absence of auditory stimuli. Our study offers new insights into the neural dynamics of motor timing and demonstrates the utility of deep learning in analyzing single-trial electroencephalography data.


Overtrust in AI Recommendations About Whether or Not to Kill: Evidence from Two Human-Robot Interaction Studies.

(2024)

This research explores prospective determinants of trust in the recommendations of artificial agents regarding decisions to kill, using a novel visual challenge paradigm simulating threat-identification (enemy combatants vs. civilians) under uncertainty. In Experiment 1, we compared trust in the advice of a physically embodied versus screen-mediated anthropomorphic robot, observing no effects of embodiment; in Experiment 2, we manipulated the relative anthropomorphism of virtual robots, observing modestly greater trust in the most anthropomorphic agent relative to the least. Across studies, when any version of the agent randomly disagreed, participants reversed their threat-identifications and decisions to kill in the majority of cases, substantially degrading their initial performance. Participants’ subjective confidence in their decisions tracked whether the agent (dis)agreed, while both decision-reversals and confidence were moderated by appraisals of the agent’s intelligence. The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty.