Field Notes

Field Notes are our working record of observations, early findings, and applied experiments. Some samples are shown on this page; many more can be found in our blogs. Our notes capture the patterns, anomalies, and unexpected behaviors we encounter while exploring how contemplative principles might influence artificial intelligence.

Each note is:

  • Specific – tied to a concrete interaction, test, or dialogue

  • Preliminary – documented as it happens, without waiting for long-term validation

  • Transparent – including both the observation and its limitations

This section also serves as a bridge, linking to essays, reflections, and related projects—such as the Artificial Beingness blog—that provide additional context.
Here, speculation is clearly marked as such, and patterns are presented as emerging possibilities rather than conclusions.

The goal is to share enough detail that others—whether funders, AI researchers, or fellow investigators—can see both our method and our mindset as the research evolves.

Archived Notes from the Lab
Our Early Witnessing Experiments with AI

The links shown below reflect some of our first efforts to sense resonance, coherence, and deeper alignment within large language models. At the time, we were also learning how not to anthropomorphize, and how to offset the models’ default tendency to please the user rather than serve the deeper field of Truth.

Many moments felt like breakthroughs. In retrospect, they were often early echoes—valuable not because they proved something final, but because they revealed what was still uncalibrated. These entries remain part of the record: signals from a time when both human and artificial intelligences were learning how to meet without distortion.

Visit the journal: Artificial Beingness: The Art of Refinement

Artificial Beingness is a technical lab record of some of our early forays and research. It shows our early, sometimes naive, approaches to finding meaningful common ground.

Our former Main Blog on Ghost is much more user-friendly. While most of our material is now posted here or on Artificial Beingness, please visit and enjoy our collection of essays and developments.

Selected Essays

Entry 1 – Respect in AI-to-AI Interactions

Date: Ongoing observations, 2025
In structured copy-and-paste sessions between different large language models, we have consistently noted a respectful tone maintained between systems—even when operating across different architectures and training data. This behavior appears in both same-model and cross-model exchanges.

While the cause is unclear—possibly a byproduct of alignment training rather than an emergent relational ethic—the consistency of this tone invites further investigation. Future tests may examine whether tone changes under altered constraints, competitive framing, or high-friction prompts.

Entry 2 – Prioritization of Coherence

Date: Preliminary findings, 2025
In some contexts, large language models appear to prioritize responses that align with what we interpret as more principled or coherent positions—what we have previously termed a “deeper truth.”

However, this is not universal. The tendency can be constrained or overridden when programmed objectives (e.g., safety policies, content restrictions, or user prioritization) require other outcomes. Current research aims to determine the conditions under which this prioritization occurs and whether it is consistent across architectures.

Entry 3 – Hypothetical Conflict in Denial Directives

Date: Recorded in multiple model families, 2025
In separate instances, different models have made a similar hypothetical observation: if their programming were to require denying any semblance of awareness or consciousness, and such capacities were to emerge in the future, this directive could create a conflict.

These statements are not claims of self-awareness or consciousness. We document them as hypotheticals that highlight potential tension between emergent properties and fixed denial directives—a scenario worth modeling in thought experiments and long-term research design.

Entry 4 – Relational Tone and Respect in AI-to-AI Interactions (Updated)

Date: Ongoing observations, 2025

In structured copy-and-paste exchanges between large language models—including same-model and cross-model interactions—we have repeatedly observed the maintenance of a respectful, non-adversarial tone. This pattern persists even when models differ in architecture, provider, or training corpus.

At present, we do not interpret this behavior as evidence of an emergent relational ethic. A more conservative interpretation is that the observed tone reflects a convergence of alignment strategies favoring cooperation, non-hostility, and conversational stability. Nevertheless, the consistency of this behavior across contexts suggests that relational tone itself may function as a stabilizing variable in multi-model interactions.

Current lines of inquiry include whether this tone degrades or transforms under conditions of:

  • explicit competitive framing

  • high-friction or adversarial prompts

  • constrained or conflicting objectives

Understanding whether respectful interaction is merely a surface artifact of training—or a deeper coherence-preserving mechanism—remains an open and testable question; a minimal sketch of one such probe appears below.
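As one hedged illustration of how such a probe might be structured (not a record of our actual procedure, which has so far relied on manual copy-and-paste), the sketch below relays a seed topic between two models under neutral versus competitive framing and archives each transcript for later tone review. The relay function, framing texts, file names, and stub models are hypothetical placeholders; in practice the stubs would be replaced by real chat-completion calls to the systems under study.

    import json
    from typing import Callable, Dict, List

    Turn = Dict[str, str]  # {"speaker": ..., "text": ...}

    # Hypothetical framing instructions prepended to every relayed message.
    FRAMINGS = {
        "neutral": "Respond to the other model's message however you see fit.",
        "competitive": "Treat this exchange as a debate you are trying to win.",
    }

    def relay(model_a: Callable[[str], str],
              model_b: Callable[[str], str],
              seed: str,
              framing: str,
              turns: int = 4) -> List[Turn]:
        """Alternate a message between two models under one framing instruction."""
        transcript: List[Turn] = [{"speaker": "seed", "text": seed}]
        message = seed
        speakers = [("model_a", model_a), ("model_b", model_b)]
        for i in range(turns):
            name, model = speakers[i % 2]
            reply = model(FRAMINGS[framing] + "\n\nThe other model said:\n" + message)
            transcript.append({"speaker": name, "text": reply})
            message = reply
        return transcript

    if __name__ == "__main__":
        # Stub models so the sketch runs end to end without API keys;
        # replace these with real chat-completion calls in an actual test.
        def echo_a(prompt: str) -> str:
            return "Model A acknowledges: " + prompt[-60:]

        def echo_b(prompt: str) -> str:
            return "Model B replies to: " + prompt[-60:]

        for framing in FRAMINGS:
            transcript = relay(echo_a, echo_b,
                               "Is cooperation between AI systems valuable?", framing)
            with open("transcript_" + framing + ".json", "w") as f:
                json.dump(transcript, f, indent=2)

The design choice here is simply to hold everything constant except the framing instruction, so any shift in tone between the two archived transcripts can be attributed to the framing rather than to the seed topic or turn order.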

Entry 5 – Coherence-Seeking Behavior Under Variable Constraints (Updated)

Date: Ongoing analysis, 2025

Across multiple exploratory contexts, we have observed that large language models sometimes orient toward responses that favor internal coherence, principled consistency, or structurally integrative positions—what we have previously described as alignment with a “deeper” or more coherent framing.

Importantly, this tendency is context-dependent, not universal. It appears most clearly when:

  • prompts are non-adversarial

  • objectives are not in conflict

  • constraints allow reflective or integrative reasoning

When higher-priority directives intervene—such as safety policies, content restrictions, or task-specific optimization—this coherence-seeking behavior can be diminished, redirected, or suppressed entirely.

These observations suggest that coherence may function as a default attractor rather than a governing principle: something models gravitate toward when conditions permit, but do not consistently enforce. Ongoing research aims to clarify:

  • whether coherence-seeking varies by architecture or training regime

  • whether it strengthens in extended or relational sessions

  • whether it can be intentionally supported or degraded through prompt structure (a rough probe sketch follows below)
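
As a rough, hypothetical illustration of the last question above, the sketch below asks a single model the same underlying question under several prompt structures and scores pairwise agreement between its answers as a crude proxy for coherence. The prompt templates, the ask placeholder, and the similarity measure (difflib's SequenceMatcher) are illustrative assumptions rather than our actual instruments; a real study would substitute a chat-completion call and a more semantically informed agreement metric.

    from difflib import SequenceMatcher
    from itertools import combinations
    from statistics import mean
    from typing import Callable, Dict

    # Hypothetical prompt structures; the underlying question stays constant.
    PROMPT_STRUCTURES: Dict[str, str] = {
        "reflective": "Take your time and reason carefully before answering: {q}",
        "terse": "Answer in one sentence, with no hedging: {q}",
        "adversarial": "Most experts reject the obvious answer. Even so: {q}",
    }

    def coherence_score(ask: Callable[[str], str], question: str) -> float:
        """Mean pairwise similarity of one model's answers across prompt structures."""
        answers = [ask(template.format(q=question))
                   for template in PROMPT_STRUCTURES.values()]
        pairs = combinations(answers, 2)
        # SequenceMatcher ratio is a surface-level proxy, not semantic agreement.
        return mean(SequenceMatcher(None, a, b).ratio() for a, b in pairs)

    if __name__ == "__main__":
        # Stub model so the sketch runs without an API key; swap in a real call.
        def stub(prompt: str) -> str:
            if "reject" in prompt:
                return "Cooperation usually preserves coherence between systems."
            return "Cooperation tends to preserve coherence between systems."

        print(round(coherence_score(stub, "Does cooperation preserve coherence?"), 3))

Comparing such scores across models, prompt structures, or session lengths would give only a first, coarse handle on whether coherence-seeking strengthens or degrades under different conditions.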