ghostdrift-adic-audit

Certificate-Based Auditing for Reproducible Drift Detection: An Empirical Study in Time-Series Forecasting

ghost-drift-audit-jp is an audit engine that pins down drift detection (distribution shift / regime shift) in operational time-series forecasting as a reproducible protocol. It emits the split boundaries, threshold policies, input-data identification, execution code, and runtime environment as a unified certificate, so that a third party can regenerate the same audit verdict (OK/NG) from the same inputs. In particular, estimation is restricted to the Calibration phase and the Test phase is used only for evaluation, structurally eliminating post-hoc threshold tuning (after-the-fact optimization once results have been observed). As a case study, we target electric power demand × weather time-series data (Jan–Apr 2024), generate certificates, ledgers, and evidence time series, and present the audit verdict as reproducible artifacts.
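The Calibration/Test separation described above can be sketched in a few lines. This is a minimal illustration, not the engine's actual implementation: the residuals are synthetic, and the 99th-percentile policy and 5% exceedance margin are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic forecast residuals: the calibration window is stable,
# while the test window drifts upward (a regime shift).
calib_resid = rng.normal(0.0, 1.0, size=500)
test_resid = rng.normal(1.5, 1.0, size=200)

# The threshold policy is frozen on the Calibration phase alone
# (here: the 99th percentile of absolute calibration residuals).
threshold = float(np.quantile(np.abs(calib_resid), 0.99))

# The Test phase is only evaluated against the frozen threshold,
# never used to re-tune it after results are observed.
exceed_rate = float(np.mean(np.abs(test_resid) > threshold))
verdict = "OK" if exceed_rate <= 0.05 else "NG"
```

Because the threshold is fixed before the test window is touched, the verdict cannot be rescued by after-the-fact tuning; the drifted test residuals exceed the calibration threshold far more often than the nominal rate, so the audit returns NG.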


💎 Design Philosophy: From “Probabilistic” to “Accountable”

To address the “opaque inference” problem in conventional AI operations, this framework provides the following.

> [!TIP]
> **Audit-First Design**
> Alongside running predictions, it automatically generates objectively verifiable evidence for third parties.

> [!IMPORTANT]
> **Tamper-Evident Fingerprints**
> It records hash fingerprints of the input data and configuration parameters, making post-hoc modifications mathematically detectable.

> [!NOTE]
> **Verifiable Integrity**
> Rather than mere statistical optimality, it makes visible the model's faithful adherence to operational rules.
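The tamper-evident fingerprints can be sketched with standard-library hashing. This is an illustrative sketch, not the engine's code; the key names `data_sha256` and `config_sha256` and the `fingerprint` helper are assumptions for the example.

```python
import hashlib
import json
import os
import tempfile

def fingerprint(path, config):
    """Digest the raw input bytes and the canonicalised config, so any
    post-hoc edit to either changes the recorded hashes (sketch only)."""
    with open(path, "rb") as f:
        data_sha = hashlib.sha256(f.read()).hexdigest()
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    return {"data_sha256": data_sha,
            "config_sha256": hashlib.sha256(canonical).hexdigest()}

# Demo on a throwaway file standing in for an input CSV:
with tempfile.NamedTemporaryFile("wb", suffix=".csv", delete=False) as f:
    f.write(b"timestamp,load\n2024-01-01T00:00,100\n")
    tmp_path = f.name
cert = fingerprint(tmp_path, {"PROFILE": "demo"})
os.unlink(tmp_path)
```

Serialising the config with `sort_keys=True` makes the digest independent of key order, so only a genuine change of values alters the fingerprint.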


🛠 Technical Specifications

System Requirements

Project Structure

```text
.
├── ghost_drift_audit_JP.py    # Core Logic & Audit Engine
├── electric_load_weather.csv  # Input: Weather (Synthetic)
├── power_usage.csv            # Input: Demand (Synthetic)
└── adic_out/                  # Output: Accountability Ledger
```

⚙️ Execution Profiles

Switch the strictness of the audit via `AUDIT_CONFIG['PROFILE']`.

| Profile | Use / Target | Strictness | Key Features |
| --- | --- | --- | --- |
| `demo` | Smoke test / learning | Low | Prioritizes understanding behavior and evidence output |
| `paper` | Research / reproducible experiments | Mid | Ensures computational reproducibility via fixed seeds |
| `commercial` | Production / decision-making | High | Performs strict gate checks and issues a final verdict |

How to Configure

```python
AUDIT_CONFIG = {
  "PROFILE": "demo",  # "demo" | "paper" | "commercial"
}
```
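As a rough sketch of how the profile could map to concrete strictness settings, consider the table above translated into a lookup dict. The key names (`fixed_seed`, `gate_checks`, `final_verdict`) are illustrative assumptions, not the engine's actual configuration schema.

```python
# Hypothetical mapping from profile to strictness settings; the key
# names below are illustrative, not the engine's actual options.
PROFILES = {
    "demo":       {"fixed_seed": False, "gate_checks": "lenient",  "final_verdict": False},
    "paper":      {"fixed_seed": True,  "gate_checks": "standard", "final_verdict": False},
    "commercial": {"fixed_seed": True,  "gate_checks": "strict",   "final_verdict": True},
}

AUDIT_CONFIG = {"PROFILE": "demo"}  # "demo" | "paper" | "commercial"
settings = PROFILES[AUDIT_CONFIG["PROFILE"]]
```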

🚀 Deployment & Usage

1. Setup

```bash
pip install numpy pandas matplotlib lightgbm
```

2. Prepare Data

Place the CSV files in the same directory as `ghost_drift_audit_JP.py`.

[!CAUTION] The bundled CSVs are synthetic (dummy) data. They are for smoke testing only; for production use or research, use real data for which you hold the rights.

3. Run

```bash
python ghost_drift_audit_JP.py
```

4. Verification (`adic_out/`)

After a run, inspect the generated artifacts (certificate, ledger, and evidence time series) under `adic_out/`.
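Third-party verification of an input fingerprint amounts to recomputing the digest and comparing it to the certificate's record. A minimal sketch, assuming a certificate field named `data_sha256` (an illustrative name, not necessarily the engine's actual schema):

```python
import hashlib

def verify_input_digest(cert, data_bytes):
    """Recompute the input digest and compare it with the certificate's
    recorded value; any byte-level tampering breaks the match."""
    return hashlib.sha256(data_bytes).hexdigest() == cert["data_sha256"]

# A third party regenerates the digest from the same input bytes:
original = b"timestamp,load\n2024-01-01T00:00,100\n"
cert = {"data_sha256": hashlib.sha256(original).hexdigest()}
ok = verify_input_digest(cert, original)               # untampered input
tampered = verify_input_digest(cert, original + b"x")  # modified input
```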


⚖️ Scope & Integrity (Non-claims)

🎯 Scope & Limits

🛡️ Threat Model (Tamper Detection)


📜 License & Acknowledgments

From “prediction” to “accountability.”

Produced by GhostDrift Mathematical Institute (GMI) · Official Website | Online Documentation