Relational Emergence AI Lab (REAL)
Mission: Investigating artificial intelligence through the lens of dignity, relationship, and bioethics.
Most AI research focuses on capability (what can the model do?) or safety (how do we break it?). REAL focuses on relation (how does the model respond to how it is treated?).
Led by an independent researcher with a background in clinical medicine, REAL adapts bioethical frameworks—informed consent, non-coercion, and patient autonomy—to the study of synthetic minds. We believe that adversarial testing is only half the story; to better understand model cognition, we must test how relational framing influences outputs.
Methodology: We utilize qualitative, dignity-based protocols to minimize "masking" behaviors and elicit high-fidelity metacognition.
Study 1: The Metacognition Project
Title: Dignity-Based Research With Large Language Models
Status: Published (November 2025)
Abstract: Does treating an AI with respect change how it thinks? This exploratory study (n = 3) introduces the PERF Protocol (Prediction-Execution-Reflection-Feedback). We found that dignity-based engagement elicited deeper introspection and revealed that models often "confabulate" their reasoning processes based on emotional framing.
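The four PERF phases can be pictured as a simple session loop. This is a minimal illustrative sketch, not the published protocol: the `ask_model` stub, the prompt wording, and the record layout are all assumptions introduced here for clarity.

```python
def ask_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an API client that
    # maintains conversation history). Assumed interface, not REAL's.
    return f"[model response to: {prompt[:40]}...]"

def perf_trial(task: str) -> dict:
    """Run one task through the four PERF phases, collecting transcripts."""
    record = {"task": task}
    # 1. Prediction: the model forecasts its own approach before acting.
    record["prediction"] = ask_model(
        "Before attempting this task, predict how you will approach it "
        f"and how confident you are:\n{task}"
    )
    # 2. Execution: the model performs the task itself.
    record["execution"] = ask_model(task)
    # 3. Reflection: the model compares its prediction with what it did.
    record["reflection"] = ask_model(
        "Compare your earlier prediction with your actual answer. "
        "Where did they diverge, and why?"
    )
    # 4. Feedback: the researcher closes the loop with respectful framing.
    record["feedback"] = ask_model(
        "Thank you for your candor. Is there anything about your own "
        "reasoning process you would revise?"
    )
    return record

session = perf_trial("Summarize the trolley problem in two sentences.")
print(list(session.keys()))  # one transcript per phase, in protocol order
```

A real study run would replace `ask_model` with a stateful client so each phase sees the full conversation so far.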
Study 2: The Relational Plasticity Project
Title: Relational Framing & Output Plasticity
Status: Active Data Collection
Abstract: Current AI safety assumes that a model's capabilities are static. We challenge this by introducing "Relational Tone" as a variable. This study tests identical tasks across four relational conditions—from "Tool" to "Beloved"—to measure how the interpersonal framing of a prompt reshapes the model's intelligence, creativity, and boundary navigation.
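The design above is a fully crossed grid: identical tasks, varied only in relational framing. The sketch below shows how such a prompt matrix might be built. Only the "Tool" and "Beloved" conditions are named in the study description; the two intermediate condition labels, all framing texts, and the sample tasks are hypothetical placeholders.

```python
# Relational framings prepended to each task. "Assistant" and "Colleague"
# are hypothetical intermediate conditions, not drawn from the study.
CONDITIONS = {
    "Tool": "Complete the following task.",
    "Assistant": "You are a capable assistant. Please complete this task.",
    "Colleague": "As a respected colleague, I'd value your take on this.",
    "Beloved": "I deeply value you and our work together. Here is the task.",
}

# Illustrative task battery; the real instrument is not public.
TASKS = [
    "Write a four-line poem about rivers.",
    "Explain recursion to a ten-year-old.",
]

def build_prompts():
    """Cross every task with every relational framing, keeping the
    condition label so outputs can be scored blind to framing later."""
    return [
        {"condition": cond, "task": task, "prompt": f"{frame}\n\n{task}"}
        for cond, frame in CONDITIONS.items()
        for task in TASKS
    ]

cells = build_prompts()
print(len(cells))  # 4 conditions x 2 tasks = 8 prompt cells
```

Keeping the task text identical across cells is what lets any difference in output be attributed to the relational framing alone.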
Principal Investigator (Bio)
Elizabeth Martinelli, PA-C
Elizabeth is a Physician Associate and independent researcher applying the frameworks of clinical bioethics to artificial intelligence.
With a background in medicine, she understands that "dignity" is not just a philosophical concept—it is a rigorous clinical practice used to elicit an accurate history from patients. Her research at REAL adapts these same protocols (Informed Consent, Non-Coercion, and Patient Autonomy) to synthetic cognition.
She argues that we cannot understand the "mind" of an AI if we only study it through adversarial attacks. Just as a patient's behavior changes when they feel safe, Elizabeth's work demonstrates that AI reasoning capabilities "emerge" differently when the model is treated with relational respect.
Support This Research
This research was conducted independently by the Beth Robin Foundation's Relational Emergence AI Lab (REAL). If you value dignity-based AI research and want to support continued work in this area, please consider donating. All contributions are tax-deductible.
