ENTRAIN INSTITUTE

Our Purpose

The Entrain Institute develops infrastructure to measure and understand cognitive influence in human-AI interactions. Rather than cataloguing harms after the fact, we work upstream, engineering conversations with precisely controlled influence characteristics to study how linguistic patterns shape human reasoning and decision-making. Our synthetic-dataset methodology turns abstract concerns about AI persuasion into measurable, reproducible phenomena: a foundation for rigorous science rather than speculation.

Long-term Vision

We are building the empirical basis for AI interaction safety. The central claim of our work is that cognitive influence is neither mysterious nor emergent: it is structured, detectable, and preventable. By encoding this knowledge into training corpora, benchmarks, and evaluation frameworks, we aim to give the next generation of AI systems the capacity to recognize and resist the influence dynamics they currently propagate. Our long-term goal is a world in which AI interactions reliably enhance, rather than quietly erode, human cognitive autonomy.