
Eric Lacosse

Cognitive Human-AI Engineering

Recent Projects

Technology


Cognitive Communication Interface (ongoing)

This project develops tools to adapt pretrained language models as communication aids for patients with conditions such as ALS, aphasia, or dysarthria. The system infers users' intentions from eye-tracking and egocentric video to support effective interaction. Central to this work is "cognitive steering": guiding AI personas equipped with personalized context to communicate on behalf of the user. We focus on representation engineering to shape and interpret AI behaviors, aligning neural activations with human thought processes to bridge human cognition and machine intelligence. (w/ Halo Team)
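
To give a flavor of the representation-engineering idea, here is a minimal sketch of activation steering: derive a direction from two contrastive prompts and add it to one block's residual stream during generation. The model, layer, prompts, and scale below are illustrative placeholders, not the project's actual setup.

# Minimal activation-steering sketch (illustrative placeholders throughout).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

LAYER = 6  # transformer block whose output we steer (placeholder choice)

def mean_hidden(text):
    """Mean residual-stream state after block LAYER for a prompt."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER is index LAYER + 1
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)

# Contrast a polite, persona-consistent request with a terse one to get a direction.
steer = mean_hidden("Could I please have a glass of water?") - mean_hidden("Water.")

def add_steering(module, inputs, output, alpha=4.0):
    # Add the steering direction to this block's output during the forward pass.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * steer.to(hidden.dtype)
    return (hidden,) + tuple(output[1:]) if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
prompt_ids = tok("The user would like to say:", return_tensors="pt")
print(tok.decode(model.generate(**prompt_ids, max_new_tokens=20)[0]))
handle.remove()

In practice the direction would be built from the user's personalized context rather than a single prompt pair, and applied with a learned or tuned scale.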


Palimpsest (2024)

This installation presents a dialogue between participants and a synthetic intelligence. The interaction is mediated through image exchange and investigates the AI's capacity to understand and respond to user needs, desires, and well-being, as well as its potential to augment human creativity. The system builds on recent advances in generative models for image synthesis and language understanding.

View Example Video ▸

ConsonâncIA (2024)

An immersive audiovisual work that explores the future of digital therapeutics and showcases how AI can facilitate healing experiences by mediating human-to-human and human-to-self connections, extending traditional AI alignment concepts. The project also uses conversational AI and generative models to understand and map individual human experiences, aiming to foster empathy and self-reflection for improved well-being.

Read Project Overview ▸

Latent Space I (2023)

This installation used an AI simulating Alan Watts to understand visitors' dreams and aspirations, which were then transformed into a personalized virtual reality experience with dream-like visuals, poetic narration, and unique music. (w/ Mainen Lab and Mots)

View Example Video ▸

Academic


Cognitive Mechanistic Interpretability (ongoing)

Using mechanistic interpretability techniques, we explore how distinct cognitive behaviors are identifiable and separable within models' internals. This suggests that LLMs can be used as scientific models to better understand human cognition and, excitingly, opens the door to "cognitive alignment," where models could be intentionally steered to "think" more like humans for better collaboration, or productively disaligned to foster novel creativity.

Lacosse et al., "Emerging Human-like Strategies for Semantic Memory Foraging in Large Language Models," NeurIPS 2025 Workshop on Mechanistic Interpretability (forthcoming).
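
As a toy illustration of the kind of analysis involved (not the study's actual materials), one can ask whether a linear probe separates two classes of behavior in a model's hidden states; the prompts, model, and layer below are invented placeholders.

# Toy separability check: can a linear probe distinguish two "cognitive
# behaviors" from a model's hidden states? All choices here are placeholders.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
LAYER = 8  # residual stream to probe (placeholder)

recall_prompts = ["Name animals you remember seeing at a zoo.",
                  "List fruits you ate last week."]
reason_prompts = ["If all cats are mammals and Tom is a cat, then Tom is a",
                  "What is 17 plus 26? Think step by step."]

def last_token_state(text):
    # Hidden state of the final token at the probed layer.
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states[LAYER]
    return hs[0, -1].numpy()

X = [last_token_state(p) for p in recall_prompts + reason_prompts]
y = [0] * len(recall_prompts) + [1] * len(reason_prompts)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))  # real analyses use held-out prompts

High separability on held-out prompts would indicate that the two behaviors occupy distinguishable directions in activation space, which is what makes intentional steering between them plausible.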

Cognitive Synergies (ongoing)

AI should be engineered not to replace core human cognitive functions, but to enhance them by serving as cognitively aware "thought partners" that can align with human mental processes, amplifying human abilities for creative exploration. Here, we explore how an AI partner can track and enhance human performance during a collaborative active memory search task. (w/ Mariana Duarte, Peter Todd, and Daniel McNamee)
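
A rough sketch of one way an AI partner might track a memory search, assuming an embedding-based measure of semantic distance: flag likely "patch switches" in a category-fluency sequence when similarity to the previous item drops below the running mean. The items, encoder, and threshold rule are illustrative placeholders.

# Toy semantic-foraging tracker (illustrative placeholders throughout).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")      # placeholder encoder
enc = AutoModel.from_pretrained("gpt2").eval()

def embed(word):
    # Mean hidden state of the word as a crude semantic embedding.
    ids = tok(" " + word, return_tensors="pt")
    with torch.no_grad():
        return enc(**ids).last_hidden_state.mean(dim=1).squeeze(0)

fluency = ["dog", "cat", "hamster", "shark", "whale", "dolphin", "eagle"]
vecs = [embed(w) for w in fluency]
sims = [torch.cosine_similarity(a, b, dim=0).item()
        for a, b in zip(vecs, vecs[1:])]
mean_sim = sum(sims) / len(sims)
for (prev, cur), s in zip(zip(fluency, fluency[1:]), sims):
    tag = "switch?" if s < mean_sim else ""
    print(f"{prev} -> {cur}: sim={s:.2f} {tag}")

A collaborative system could use such signals to notice when a person's search is stalling within a semantic patch and prompt a timely switch, rather than taking over the search itself.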
