Meaningful innovation requires transparency, agility, and cross-sector collaboration.

Tackling child safety and disinformation requires understanding communities, regulatory landscapes, and human-centered design. I bring five years of cross-sector experience in responsible AI and youth mental health to help teams meet these challenges.

I'm building a career at the intersection of technology and mental health because responsible innovation requires both, and too few people bridge these worlds.

As a Product Policy Researcher at Google[X], I investigated AI bias in predictive policing models across California through interviews, focus groups, and systematic reviews, uncovering how historical crime data shaped by systemic racism was being encoded into algorithms that direct policing. Using this case study, I trained cross-functional teams in qualitative research methods and community engagement, showing how technologies built without community-centered design can cause serious harm and leave mental health professionals to clean up the mess.

As a Research Assistant at the UCLA Institute for Technology, Law, and Policy, I co-led the production of an animated YouTube series on AI bias with Professor John Villasenor, managing the entire production cycle and leading a digital marketing campaign that garnered over 25,000 views. The series explored bias not only as a technical challenge but as a policy, ethics, and social justice issue, explaining the roles of civil society, companies, and academics in developing responsible AI. Several of the ideas we outlined anticipated policies enacted two years later in the EU Digital Services Act.

Caitlyn Vergara

Research Assistant at Harvard Medical School + Policy Lead at SimPPL

Understanding how to prevent harm required understanding the people experiencing it. I pivoted to mental health research at Harvard Medical School and Stanford Psychiatry, where I help develop programs to expand the youth mental health care workforce. I've designed products that train nonspecialist providers and peer supporters to support youth experiencing anxiety, depression, and climate anxiety.

Through this academic work, I've strengthened my skills in qualitative and quantitative research methods, project management, UX design, vendor management, and community engagement.

At Harvard, I've led longitudinal surveys, focus groups, and operations of a digital learning platform across multiple states. At Stanford, I've been a co-designer and research consultant with teams across California, Toronto, and Australia. These experiences taught me that effective safeguards require understanding both regulatory compliance and lived human experience, skills that are rarely combined in tech teams.

At SimPPL, as Trust & Safety Policy Lead, I authored publications on the EU's Digital Services Act, the significance of transparency reporting, and a framework for understanding social media safeguards. My work cuts through technical jargon to welcome non-technical stakeholders, like mental health professionals and policymakers, into digital safety conversations. I have presented this work at major tech policy conferences in Washington, DC, at Stanford, and in Taipei. Currently, I'm building external communications strategies with SimPPL's co-founders to launch Arbiter, a tool that tracks how disinformation narratives evolve across digital publics.

My first job out of college was with Bob Gnaizda Youth Leaders (BGYL), a nonprofit that mentors public policy students in communications and public speaking. I continue to work seasonally with BGYL to schedule advocacy meetings with state representatives and federal agencies, including SAMHSA, FTC, and DOJ, during which students have presented on topics such as youth mental health and child online safety.

Through this journey, I've learned that responsible innovation requires understanding regulatory landscapes, building genuine community relationships, and bridging the gap between those who design technology and those who live with its consequences.

My goal is to inform the policies that shape how organizations innovate, ensuring technology serves humanity rather than the other way around.