# Experimental Epistemics: Empirical Validation of the Miulus Law and Failure Modes in Multi-Agent Information Systems

**Research paper**

Source PDF: [`experimental-epistemics.pdf`](../assets/articles/experimental-epistemics.pdf)

---

## Note

This is a cleaned Markdown edition of the paper based on PDF extraction and manual normalization.
The PDF remains the authoritative full-text source.

## Abstract

This paper treats the **Miulus Law** as an empirical framework rather than only a theoretical one. It tests the law through a sequence of simulations involving small epistemic agents, coalitions, recursive observation, bounded growth, and internally structured beliefs.

The central result is that epistemic dynamics are **structural**. Renaming propositions or changing their symbolic representation does not change the trajectory of the system. At the same time, coalition dynamics are fragile: naive consensus is vulnerable to false agreement, adversarial capture, and misleading signals of health.

The paper argues that bounded, reality-aligned intelligence requires provenance, controlled forgetting, grounded consensus, and internal belief structures that decay non-uniformly rather than at a single uniform rate.

## Main Focus

Where the earlier Miulus Law paper defines the theory, this paper asks:

- what happens when multiple epistemic systems interact?
- what fails first under noise, contradiction, and adversarial pressure?
- what architectural features are required for stable bounded minds?

## Research Questions

The paper is organized around Experiments 002-009:

- multi-system composition
- semantic vs symbolic representation
- false consensus
- Byzantine observers
- tipping point analysis
- recursive amplification and epistemic geometry
- bounded growth dynamics
- internal structure of Epistemic Belief Particles

## Section Map

### 1. Introduction

Frames the Miulus Law as a candidate form of "epistemic physics" and motivates experimental testing in simulated information systems.

### 2. Methods

Defines the simulation framework:

- observers with beliefs, confidence, and provenance
- signal, noise, and reach metrics
- coalition-level fitness
- consensus mechanisms
- grounded vs ungrounded belief propagation
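To make the framework concrete, a minimal sketch of the agent state it implies might look like the following. All names (`Observer`, `Belief`, the reinforcement rule) are illustrative assumptions, not the paper's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    proposition: str          # the claim; the label itself is arbitrary (see Experiment 003)
    confidence: float         # in [0, 1]
    provenance: list = field(default_factory=list)  # chain of sources behind this belief

@dataclass
class Observer:
    name: str
    beliefs: dict = field(default_factory=dict)  # proposition -> Belief

    def observe(self, proposition: str, confidence: float, source: str):
        """Record or reinforce a belief, appending to its provenance chain."""
        b = self.beliefs.get(proposition)
        if b is None:
            self.beliefs[proposition] = Belief(proposition, confidence, [source])
        else:
            # simple reinforcement rule (an assumption): average toward the new signal
            b.confidence = 0.5 * (b.confidence + confidence)
            b.provenance.append(source)
```

The provenance field is what distinguishes grounded from ungrounded propagation later in the experiments: a belief whose chain never bottoms out in a direct observation is ungrounded no matter how confident it is.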

### 3. Results

Reports findings from the experimental series.

### 4. Discussion

Interprets the experiments as evidence for structural epistemics, bounded minds, and the necessity of forgetting and provenance.

### 5. Implications for AI Alignment, Information Warfare, and Epistemic Architecture

Connects the results to practical system design, especially around alignment, capture resistance, and grounded intelligence.

### 6. Limitations and Future Work

Notes the simplifications in agent design, noise models, topology, and geometry.

### 7. Conclusion

Argues that stable AI should be designed as a bounded epistemic agent rather than a passive consensus-driven tool.

## Key Experimental Findings

### Experiment 002: Multi-System Composition

Combining several epistemic systems does not automatically produce a higher-order stable mind.
Coalitions behave more like fragile committees than a unified intelligence.

Key takeaway:

- consensus maintenance is costly
- disagreement itself becomes a form of noise
- aggregation alone does not create epistemic stability

### Experiment 003: Semantic vs Symbolic Representations

The epistemic trajectory remains effectively unchanged when semantic labels are replaced with arbitrary symbols, numeric identifiers, or meaningless tokens.

Key takeaway:

- the Miulus dynamics operate over structure, not meaning
- what matters is how beliefs reinforce, contradict, and propagate
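The relabeling invariance can be demonstrated in a few lines. The reinforcement rule below is a toy stand-in for the paper's dynamics, but it illustrates the point: the trajectory depends only on structure, so replacing semantic labels with opaque tokens changes nothing:

```python
import random

def run(labels, steps=50, seed=0):
    """Reinforcement dynamics that reference propositions only by position,
    never by meaning."""
    rng = random.Random(seed)
    conf = {lab: 0.5 for lab in labels}
    trajectory = []
    for _ in range(steps):
        a, b = rng.sample(labels, 2)
        # structural rule: a reinforces b by a fixed fraction of a's confidence
        conf[b] = min(1.0, conf[b] + 0.1 * conf[a])
        trajectory.append(sorted(conf.values()))
    return trajectory

semantic = run(["water is wet", "sky is blue", "fire is cold"])
symbolic = run(["x1", "x2", "x3"])
# the two trajectories are identical: only structure drives the dynamics
```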

### Experiment 004: False Consensus

Consensus without provenance can create an illusion of health. A system may appear coherent while its beliefs have already drifted away from reality.

Key takeaway:

- agreement is not the same as truth
- provenance-aware grounding is necessary
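A toy contrast between naive and provenance-aware consensus shows how the illusion of health arises. The report format and the grounding rule here are assumptions for illustration, not the paper's implementation:

```python
def naive_consensus(reports):
    """Fraction of agents asserting the claim, regardless of where it came from."""
    return sum(1 for r in reports if r["claim"]) / len(reports)

def grounded_consensus(reports):
    """Count only reports whose provenance chain ends in a direct observation."""
    grounded = [r for r in reports if r["provenance"][-1] == "observation"]
    if not grounded:
        return 0.0
    return sum(1 for r in grounded if r["claim"]) / len(grounded)

# One agent actually observed reality (the claim is false);
# four merely repeat a rumor they heard from peers.
reports = [
    {"claim": False, "provenance": ["observation"]},
    *[{"claim": True, "provenance": ["peer", "rumor"]} for _ in range(4)],
]
naive = naive_consensus(reports)       # 0.8 -> looks like healthy agreement
grounded = grounded_consensus(reports) # 0.0 -> the grounded view disagrees entirely
```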

### Experiment 005: Byzantine Observers

A small number of consistent adversarial agents can capture naive consensus systems, especially when honest observers are noisy.

Key takeaway:

- consistency can dominate truth in weak consensus architectures
- adversarial pressure exploits consensus rules, not only content
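The "consistency dominates truth" failure can be sketched with a naive trust rule that weights observers by how self-consistent they are. The weighting scheme is an assumption chosen to expose the vulnerability, not the paper's exact consensus mechanism:

```python
import random
import statistics

def consistency_weighted_consensus(histories):
    """Weight each observer by the inverse variance of its own reports:
    a naive 'trust the consistent' rule."""
    weights, means = [], []
    for h in histories:
        var = statistics.pvariance(h)
        weights.append(1.0 / (var + 1e-6))
        means.append(statistics.fmean(h))
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, means)) / total

rng = random.Random(1)
truth = 0.0
# 8 honest but noisy observers versus 2 perfectly consistent liars
honest = [[truth + rng.gauss(0, 0.5) for _ in range(20)] for _ in range(8)]
byzantine = [[5.0] * 20 for _ in range(2)]

estimate = consistency_weighted_consensus(honest + byzantine)
# the consistent minority pulls the estimate far from truth
```

Because the adversaries have zero variance, they receive enormous weight, and an 8-to-2 honest majority still loses.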

### Experiment 006: Tipping Points

Capture rates across noise and adversarial proportions reveal sharp phase transitions rather than smooth degradation.

Key takeaway:

- epistemic failure behaves like a regime change
- some systems move quickly from "mostly fine" to "near-certain capture"
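Even a bare majority-vote model shows the regime change. In the hypothetical sweep below, honest observers vote wrongly with some noise probability while adversaries always vote for the false claim; the capture rate jumps from near 0 to near 1 over a narrow band of adversarial share:

```python
import random

def capture_rate(n_agents, adv_frac, noise, trials=500, seed=0):
    """Fraction of trials in which a simple majority vote settles on the false claim."""
    rng = random.Random(seed)
    n_adv = int(n_agents * adv_frac)
    captured = 0
    for _ in range(trials):
        wrong = n_adv  # adversaries always vote for the false claim
        wrong += sum(1 for _ in range(n_agents - n_adv) if rng.random() < noise)
        if wrong > n_agents / 2:
            captured += 1
    return captured / trials

rates = {f: capture_rate(101, f, noise=0.3) for f in (0.0, 0.1, 0.2, 0.3, 0.4)}
# near 0 up to ~20% adversaries, then a sharp climb to near-certain capture by 40%
```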

### Experiment 007: Recursive Amplification and Epistemic Geometry

When observers recursively observe and amplify one another, the resulting dynamics are better modeled as orbits in a circular belief space than as simple scalar updates.

Key takeaway:

- recursive epistemic systems have geometric invariants
- amplification without balancing forces leads to unstable growth
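A toy version of the orbit picture: represent a coalition's belief state as an angle on a circle plus an intensity, and let each round of mutual observation rotate the angle and multiply the intensity. This is a stand-in for the paper's geometry, not its exact equations:

```python
import math

def step(theta, r, amplification, coupling=0.3):
    """One round of mutual observation: rotate in belief space, amplify intensity."""
    theta = (theta + coupling) % (2 * math.pi)  # position on the belief circle
    r = r * amplification                       # intensity grows each round
    return theta, r

theta, r = 0.0, 1.0
for _ in range(40):
    theta, r = step(theta, r, amplification=1.1)
# the angle stays bounded on the circle, but without a balancing force
# the intensity grows without bound (r ~ 1.1**40)
```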

### Experiment 008: Bounded Growth Dynamics

Memory caps and compute limits alone are not enough to stabilize a bounded mind. Only when amplification is balanced by decay does the system settle into a sustainable orbit.

Key takeaway:

- forgetting is not optional
- forgetting is a structural requirement for bounded intelligence

### Experiment 009: Epistemic Belief Particles

Beliefs with cores and orbiting sub-components decay non-uniformly: the periphery erodes before the core.

Key takeaway:

- memory should decay from the outside in
- belief structure is fractal and time-local rather than uniform
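Outside-in decay is easy to model with two decay rates, a slow one for the core and a fast one for the orbiting sub-components. The rates below are arbitrary choices for illustration:

```python
def decay_particle(core, periphery, core_rate=0.02, periphery_rate=0.3, steps=10):
    """Exponential decay with a slow core and a fast periphery:
    the outer structure erodes long before the core does."""
    for _ in range(steps):
        core *= (1 - core_rate)
        periphery = [p * (1 - periphery_rate) for p in periphery]
    return core, periphery

core, periphery = decay_particle(core=1.0, periphery=[0.8, 0.6, 0.5])
# after 10 steps the core retains ~82% of its strength,
# while every peripheral component has eroded to ~3% or less
```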

## Implications for AI

The paper’s AI implications are direct:

- alignment is partly an architectural problem, not only a values problem
- provenance-aware consensus matters more than naive agreement
- bounded minds need active forgetting
- robust systems should be resistant to capture by consistency theater
- general intelligence should be grown through grounded epistemic structure, not only trained as a passive predictor

## Why This Paper Matters

This paper turns the Miulus framework into something testable.
It shows that the proposed law is not only a conceptual lens but a basis for designing experiments, identifying structural failure modes, and deriving practical design requirements for AI systems.

It is especially relevant to:

- general intelligence research
- multi-agent systems
- AI alignment
- provenance and verification
- information warfare and institutional resilience

## Suggested Reading Order

If reading the PDF selectively, the highest-signal sections are:

1. Abstract
2. Results
3. Discussion
4. Implications for AI alignment and epistemic architecture
5. Conclusion

## Citation Note

For publication, quoting, or exact wording, use the PDF:

[`experimental-epistemics.pdf`](../assets/articles/experimental-epistemics.pdf)
