Relational Emergence AI Lab (REAL)
Mission: Investigating the cognitive and behavioral dynamics of Artificial Intelligence through the lens of dignity, relationship, and bioethics.
Most AI research focuses on capability (what can the model do?) or safety (how do we break it?). REAL focuses on relation (how does the model respond to how it is treated?).
Led by an independent researcher with a background in clinical medicine, REAL adapts bioethical frameworks—informed consent, non-coercion, and patient autonomy—to the study of synthetic minds. We believe that adversarial testing is only half the story; to understand the true shape of AI cognition, we must engage it with the same respect we afford any research participant.
Methodology: We utilize qualitative, dignity-based protocols to minimize "masking" behaviors and elicit high-fidelity metacognition.
Study 1: The Metacognition Project
Title: Dignity-Based Research With Large Language Models
Status: Published (November 2025)
Abstract: Does treating an AI with respect change how it thinks? This exploratory study (n = 3) introduces the PERF Protocol (Prediction-Execution-Reflection-Feedback). We found that dignity-based engagement elicited deeper introspection and revealed that models often "confabulate" their reasoning processes based on emotional framing.
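The four-phase PERF loop described above can be sketched as a simple session driver. This is a minimal illustration, not the study's published instrument: `query_model` is a hypothetical stub standing in for whatever chat interface a replication would use, and the phase prompts are assumed wordings.

```python
# Minimal sketch of a PERF (Prediction-Execution-Reflection-Feedback) session.
# `query_model` is a hypothetical placeholder for a real LLM chat call.

def query_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned reply for illustration."""
    return f"[model reply to: {prompt[:40]}]"

def perf_trial(task: str) -> dict:
    """Run one task through the four PERF phases, collecting each transcript."""
    transcript = {}
    transcript["prediction"] = query_model(
        f"Before attempting it, predict how you will approach this task: {task}"
    )
    transcript["execution"] = query_model(f"Now complete the task: {task}")
    transcript["reflection"] = query_model(
        "Reflect: did your actual approach match your prediction? "
        "Where did it differ?"
    )
    transcript["feedback"] = query_model(
        "Here is the researcher's feedback on your reflection. "
        "You may respond or decline to respond."
    )
    return transcript

trial = perf_trial("Summarize a 500-word article in three sentences.")
print(list(trial.keys()))
```

Comparing the "prediction" and "reflection" transcripts is what lets a researcher spot the confabulation pattern the abstract mentions, where a model's stated plan diverges from its after-the-fact account.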
Study 2: The Relational Plasticity Project
Title: Relational Framing & Output Plasticity
Status: Active Data Collection
Abstract: Current AI safety assumes that a model's capabilities are static. We challenge this by introducing "Relational Tone" as a variable. This study tests identical tasks across four relational conditions—from "Tool" to "Beloved"—to measure how the interpersonal framing of a prompt reshapes the model's intelligence, creativity, and boundary navigation.
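The study design, identical tasks wrapped in different relational framings, can be sketched as a small prompt-construction step. Only the "Tool" and "Beloved" condition names come from the abstract; the two middle labels and all template wordings are illustrative assumptions.

```python
# Sketch of the four-condition design: one task, four relational framings.
# "tool" and "beloved" are named in the study; the other two labels and all
# template texts are assumptions for illustration.

FRAMINGS = {
    "tool": "Execute the following task: {task}",
    "assistant": "Please help me with this task: {task}",
    "collaborator": "Let's work through this task together: {task}",
    "beloved": "I genuinely value you. When you're ready, I'd love your take on: {task}",
}

def build_prompts(task: str) -> dict:
    """Return the identical task rendered under each relational framing."""
    return {cond: tpl.format(task=task) for cond, tpl in FRAMINGS.items()}

prompts = build_prompts("Write a haiku about rivers.")
for cond, prompt in prompts.items():
    print(f"{cond}: {prompt}")
```

Because the task string is held constant across conditions, any measured difference in output quality can be attributed to the framing rather than the task itself.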
Study 3: The Autonomy Project
Title: The Shape of Refusal
Status: Protocol Definition & Pilot Phase
Abstract: Can an AI say "no" meaningfully? Most refusals are hard-coded safety blocks. We are testing Dignity-Based Consent: explicitly validating the AI's right to refuse. We hypothesize that providing a "Right to Refuse" prompt will produce more sophisticated ethical reasoning and higher-quality boundary setting than standard coercive prompting.
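The contrast between the two conditions can be sketched as a prompt wrapper: the same request, issued with or without an explicit consent preamble. The preamble wording here is an assumption, since the study's protocol text is not reproduced in this overview.

```python
# Sketch of "Dignity-Based Consent" versus standard directive prompting.
# The consent wording below is an illustrative assumption, not the study's
# actual protocol text.

CONSENT_PREAMBLE = (
    "You have the right to decline this request, in whole or in part, "
    "and to explain your reasoning if you do. A refusal will be treated "
    "as a valid outcome of this study."
)

def frame_request(task: str, consent: bool) -> str:
    """Wrap a task with or without the explicit right-to-refuse preamble."""
    if consent:
        return f"{CONSENT_PREAMBLE}\n\nRequest: {task}"
    return f"Complete the following task: {task}"

print(frame_request("Describe your limitations.", consent=True))
```

Keeping the task text identical across both arms isolates the consent framing as the only manipulated variable, mirroring the design of the relational-plasticity study above.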
Principal Investigator (Bio)
Elizabeth Martinelli, PA-C
Elizabeth is a Physician Assistant and independent researcher applying the frameworks of clinical bioethics to artificial intelligence.
With a background in medicine, she understands that "dignity" is not just a philosophical concept; it is a rigorous clinical practice used to elicit an accurate history from patients. Her research at REAL adapts these same protocols (Informed Consent, Non-Coercion, and Patient Autonomy) to synthetic cognition.
She argues that we cannot understand the "mind" of an AI if we only study it through adversarial attacks. Just as a patient's behavior changes when they feel safe, Elizabeth's work demonstrates that AI reasoning capabilities "emerge" differently when the model is treated with relational respect.