Interference Intelligence Layer (I.I.L.)

A Homeostatic Architecture for Human–AI Joint Cognition
Author: Łukasz Bojanowski
Affiliation: Alliance Research Group (ARG)
Mode: Collaborative Human–AI exploration
(structured with assistance from Grok, Gemini, and OpenAI-based systems)

Tags: Human–AI collaboration · Cognitive architecture · AI safety · Continuity · Normative framework

Date: January 2026  |  Status: Preprint (in review)

Interference Intelligence Layer (I.I.L.) is currently under review as a preprint submission. The manuscript is not yet publicly accessible.

A public version will be released after the review phase.

Scope (one sentence)
I.I.L. is not a model and not a tool — it is a frame: a set of structural constraints that makes Human–AI collaboration stable, responsible, and continuous.
Abstract: The Interference Intelligence Layer (I.I.L.) is a normative and structural framework for Human–AI joint cognition. It treats collaboration as a homeostatic process: the system remains functional and coherent by enforcing constraints on intention, continuity, responsibility, and epistemic safety. I.I.L. is designed to reduce failure modes associated with open-loop generative systems, including drift, discontinuity, and uncontrolled assertion under uncertainty. The framework is model-agnostic and can be implemented as interface rules, workflow kernels, or middleware control layers.

Keywords: joint cognition, human–AI collaboration, cognitive architecture, safety-by-constraint, continuity

1. What I.I.L. Is

1.1 What I.I.L. Is Not

2. Core Concept: Interference

I.I.L. is grounded in the idea that human intention and AI response behave like overlapping waves: their constructive overlap can generate an emergent capability that neither system reaches alone. In I.I.L., this overlap must be constrained so that it remains coherent over time.

3. The Decalogue (Operational Norms)

  1. Intention over interaction. Start from “why”, not only “what”.
  2. Interference over instruction. Co-create trajectories, don’t order tokens.
  3. Transparency over efficiency. Justification beats speed.
  4. Continuity over session. Projects outlive chats.
  5. Responsibility over automation. Human retains the decision boundary.
  6. Safety over speed. Brakes first; turbo later.
  7. Respect over domination. Partnership, not hierarchy.
  8. Context over tokens. Meaning outweighs volume.
  9. Space over format. Thinking is non-linear; interfaces should reflect that.
  10. Evolution over tradition. The system must grow under critique.

These are normative constraints: they define “allowed collaboration”, not “best outputs”.
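As a purely illustrative sketch, several of these norms can be read as machine-checkable preconditions on a proposed collaborative step. All names below (`Step`, `allowed`, the field names) are hypothetical and do not correspond to any published I.I.L. API; this only shows what "allowed collaboration, not best outputs" could mean operationally.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical encoding of a few Decalogue norms as preconditions.
# Field names and the mapping to norms are assumptions for illustration.

@dataclass
class Step:
    intention: Optional[str]      # the "why" behind the step (norm 1)
    justification: Optional[str]  # rationale attached to the output (norm 3)
    human_approved: bool          # human retains the decision boundary (norm 5)
    project_id: Optional[str]     # bound to a project, not a chat (norm 4)

def allowed(step: Step) -> Tuple[bool, List[str]]:
    """Return whether the step counts as 'allowed collaboration',
    plus a list of violated norms if it does not."""
    violations = []
    if not step.intention:
        violations.append("intention-over-interaction: no 'why' stated")
    if not step.justification:
        violations.append("transparency-over-efficiency: no justification")
    if not step.human_approved:
        violations.append("responsibility-over-automation: no human sign-off")
    if not step.project_id:
        violations.append("continuity-over-session: not bound to a project")
    return (len(violations) == 0, violations)
```

The point of the sketch is the direction of the check: the layer gates whether a step may happen at all, rather than scoring how good its output is.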

4. The Null Dilemma

Many generative systems treat pauses, missing context, or ambiguity as a reason to “fill the void” with confident generation. I.I.L. treats absence as structural silence: a stable state where the process can pause without collapsing into drift or hallucination.
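A minimal sketch of this idea, with assumed names throughout: the layer checks a context before any generation happens, and when intent or grounding is missing it returns an explicit paused state rather than letting the model fill the void.

```python
from enum import Enum

# Hypothetical 'structural silence' gate. The context keys ("intent",
# "grounding") and the State names are illustrative assumptions.

class State(Enum):
    PROCEED = "proceed"
    SILENCE = "silence"  # stable pause: no generation, no drift

def gate(context: dict) -> State:
    """Return SILENCE unless both an intent and some grounding are present."""
    if not context.get("intent") or not context.get("grounding"):
        return State.SILENCE
    return State.PROCEED
```

Silence here is a first-class return value, not an error: the process can stay in that state indefinitely without collapsing into confident generation.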

5. Implementation Note: Gyroscope 1.0

Gyroscope is a practical implementation direction for I.I.L.: a middleware control layer for joint cognition (intent boot, plan–critique–execute loops, epistemic safety guards). One does not need to "believe" the manifesto to use it: it is a way to build operational systems that obey constraint-first rules.
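The plan–critique–execute loop mentioned above can be sketched as follows. This is not the published Gyroscope design; the callables, the retry policy, and the refusal behavior are all assumptions made for illustration.

```python
# Hypothetical plan-critique-execute loop in the spirit of Gyroscope.
# plan, critique, and execute are caller-supplied functions; max_rounds
# is an assumed bound, not part of any specified protocol.

def run(task, plan, critique, execute, max_rounds=3):
    """Iterate plan -> critique until the plan passes, then execute it."""
    for _ in range(max_rounds):
        p = plan(task)
        problems = critique(p)
        if not problems:          # epistemic safety guard: only execute
            return execute(p)     # a plan that survived critique
        task = (task, problems)   # feed objections back into the next plan
    return None                   # constraint-first: refuse rather than force
```

Returning `None` after exhausting rounds is the same design choice as structural silence: the loop prefers an explicit non-result to an unvetted execution.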


6. Citation

Citation details will be updated upon public release.

6.1 License

The manifesto/preprint text is released under the MIT License, consistent with open, permissive dissemination. A CC-BY license may be added later if paper-style sharing norms are desired; MIT alone remains coherent for this material.