Beacon: Protect-Then-Select Attention Architecture

(MG-OS + GD-Attention Demo)

Related resources

Paper (preprint)
MG-OS: https://zenodo.org/records/17712891
GD-Attention: https://zenodo.org/records/16757311

Project page
https://www.ghostdriftresearch.com/

Repository
https://github.com/GhostDriftTheory/beacon

This repository is a minimal architecture demo. It visualizes a compact architectural contrast:

Softmax mixes all candidates.
MG-OS + GD-Attention first protects at-risk minority-important signals, then selects a single winner.

Because Beacon intervenes in the protection and final selection of semantic candidates, it should be treated as a high-sensitivity architecture requiring careful evaluation and interpretation.


Core architecture path

Transformer-like attention logits
↓
MG-OS conditional barrier
↓
GD-style single selection

This demo is designed to make that path visible in a small, reproducible form.
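The path above can be sketched in a few lines. This is an illustrative sketch, not the repository's implementation: the function names (`mgos_barrier`, `gd_select`), the `margin` threshold, and the exact at-risk condition are assumptions made for the example.

```python
import numpy as np

def softmax(logits):
    # Baseline route: soft mixing over all candidates.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def mgos_barrier(logits, minority_idx, margin=1.0):
    # Hypothetical conditional barrier: lift the minority-important
    # candidate only when it trails the leader by less than `margin`.
    out = logits.copy()
    gap = logits.max() - logits[minority_idx]
    if 0.0 < gap < margin:
        out[minority_idx] = logits.max() + 1e-6
    return out

def gd_select(logits):
    # GD-style single selection: one winner instead of a weighted mix.
    return int(np.argmax(logits))

logits = np.array([2.0, 1.6, 0.1])             # candidate 1 is "at risk"
baseline = int(np.argmax(softmax(logits)))     # softmax route picks 0
proposed = gd_select(mgos_barrier(logits, 1))  # protect-then-select picks 1
print(baseline, proposed)  # 0 1
```

Note that softmax preserves the ranking of the raw logits, so the architectural difference only becomes visible once the barrier reshapes the competition before selection.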


What this updated demo shows

The app compares two routes on the same toy task.

1. Baseline: softmax attention

2. Proposed: MG-OS + GD-Attention


Main visible outputs

This app focuses on the following architecture-level differences:

The point is not only whether the final prediction changes, but how the internal competition changes before selection.


Representative case view

The app shows one representative case in detail.

Rather than displaying an arbitrary minority example, it preferentially displays a representative rescue case:

If no such case exists under the current setting, the app falls back to:

For that case, the app visualizes:

This is meant to make the protect-then-select mechanism directly observable.


Batch evaluation view

The app also evaluates the architecture over many samples.

It reports:

These metrics are intended to show not only whether the proposed route helps minority-important cases, but also whether it does so without blindly biasing every sample.
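A batch-level check of that claim can be sketched as follows. Everything here is an assumption for illustration: the random logits, the `minority_idx = 0` convention, and the flip-rate summary are stand-ins for the metrics the app actually reports.

```python
import numpy as np

rng = np.random.default_rng(0)

def baseline_route(logits):
    # Softmax mixing preserves the argmax, so the baseline pick is argmax.
    return int(np.argmax(logits))

def protected_route(logits, minority_idx=0, margin=1.0):
    # Conditional barrier followed by single selection.
    out = logits.copy()
    gap = logits.max() - logits[minority_idx]
    if 0.0 < gap < margin:
        out[minority_idx] = logits.max() + 1e-6
    return int(np.argmax(out))

n = 1000
flips = sum(
    baseline_route(l) != protected_route(l)
    for l in rng.normal(size=(n, 4))
)
# If the barrier were a blanket minority bias, flips / n would approach 1;
# a conditional barrier changes only the at-risk fraction of samples.
print(f"flip rate: {flips / n:.1%}")
```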


Why the conditional barrier matters

This demo does not implement the barrier as a permanent minority bias.

Instead, the barrier is intended to activate only when the minority-important candidate is:

This makes the demo closer to an architecture intervention than to a fixed class preference.
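To make that distinction concrete, here is an illustrative comparison between a permanent bias and a conditional barrier. The function names and the `margin` band are assumptions for the sketch, not the repository's actual rule.

```python
import numpy as np

def permanent_bias(logits, minority_idx, boost=1.0):
    # What the demo does NOT do: always push the minority candidate up.
    out = logits.copy()
    out[minority_idx] += boost
    return out

def conditional_barrier(logits, minority_idx, margin=1.0):
    # What the demo intends: intervene only in the "at risk" band, where
    # the minority candidate trails the leader by less than `margin`.
    out = logits.copy()
    gap = logits.max() - logits[minority_idx]
    if 0.0 < gap < margin:
        out[minority_idx] = logits.max() + 1e-6
    return out

dominant = np.array([0.0, 3.0])   # minority already winning: untouched
at_risk  = np.array([2.0, 1.6])   # small gap: protected
hopeless = np.array([5.0, 0.0])   # large gap: left alone
for l in (dominant, at_risk, hopeless):
    print(conditional_barrier(l, 1))
```

The conditional barrier leaves the dominant and hopeless cases unchanged, whereas a permanent bias would shift every sample regardless of whether protection is needed.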


Ethical significance and careful handling

Beacon should not be treated as a mere performance-oriented attention variant.

The core issue in this architecture is not only how strongly candidates are weighted, but which candidates are protected, which candidates are allowed to collapse, and which candidate is ultimately granted representational priority. In other words, Beacon intervenes not only in numerical mixing, but in the structure of semantic competition itself.

This matters because the architecture explicitly introduces a protect-then-select path:

Once this kind of intervention is introduced, the architecture can no longer be understood only through surface-level metrics such as accuracy gains or efficiency changes. What must also be examined is:

For that reason, Beacon belongs to a technically and ethically sensitive category. It touches not only model behavior, but also broader questions of:

This does not mean that the repository claims to solve AI ethics, nor does it claim consciousness, agency, or moral status for current AI systems. The point is narrower and more concrete:

if an architecture explicitly intervenes in which semantic candidates survive and which one is selected, then that architecture should be evaluated and communicated with unusual care.

In practical terms, Beacon should therefore not be read as a simple “accuracy trick” or as a generic architectural tweak. Its significance lies in making internal selection structure more visible and more deliberate. That visibility is valuable, but it also creates design responsibility.

Accordingly, this repository is presented as a compact research demo for careful inspection of:

It is not presented as a ready-made high-stakes deployment recipe. Any future use in socially sensitive or safety-critical contexts would require case-specific evaluation, interpretability analysis, failure-mode study, and explicit responsibility design.


Presets included

The app includes three initial presets:

These presets make the architectural contrast visible quickly, especially in the rescue-oriented regime.


Repository structure


Run locally

pip install -r requirements.txt
streamlit run app.py

Interpretation

This is a compact architecture demo, not a production model.

Its purpose is to make the following structural claim visible:

The intended contribution of the demo is therefore not large-scale benchmark performance, but a visible contrast between:


Scope

This repository does not include:

It is a compact visualization of an architecture idea.