CCT Labs: A Reference Lab for Programmable Physics¶
We're a small, rigorous R&D lab building coherent field control infrastructure, with Space as our long-term focus.
Coherent field control infrastructure = phase-coherent actuation + synchronized sensing + low-latency feedback + calibration protocols + an energy ledger, implemented with explicit actuator limits (delay and bandwidth/low-pass response) and explicit measurement limits (finite-shot noise and defined averaging), packaged as a repeatable methodology and reference devices.
Our theoretical foundation is the Continuum Computation Thesis (CCT), a framework for understanding how physical systems trade bandwidth, coherence, and energy. We develop the theory, run the simulations, and build the hardware that turns coherent control from conjecture into engineering capability.
Near-term, CCT Labs functions first as a reference lab: the immediate output is validated methodology, reference devices, and reproducible benchmarks. Application narratives remain downstream of that validation.
AI and biology serve as calibration environments for the same measurement-and-control stack; space is the primary long-horizon destination of the program.
What we're building in 12 months:
- A photonic measurement stack: a photonic reference bench (RFH-QF) to reproduce discrete response bands under declared tolerances, plus a Hybrid MZI displaced-counting/homodyne sweep to calibrate the observer slider and discreteness metrics under declared confounders.
- An RF/EM field-control bench (RFH-PL): geometry/basin validation first, controller selection second, under declared actuation limits (delay and bandwidth/low-pass response), estimator/noise models, and matched-resource comparisons.
- A VO₂ coherent-vs-thermal benchmark: a first material-control hardware test asking whether coherent optical driving buys more task control per joule than thermal equilibrium under a full ledger.
- A first hardware RFH + Prog_T ledger: one declared measurement stack (Lens Tools v1) with latency/bandwidth constraints, finite-shot variance, calibration rules, and benchmark comparisons carried across benches.
The Challenge¶
Spaceflight remains expensive because payload fraction falls exponentially with Δv (rocket equation), making missions propellant-limited and hard to reuse. To change this, we need field-based external infrastructure that can reduce reliance on onboard propellant and structural mass.
AI faces a similar physics bottleneck: frontier training is energy- and capex-intensive. Both domains hit the same wall: brute-force control is expensive.
The common issue: we are fighting physics instead of surfing it, spending energy to overpower dynamics instead of steering their native coherence.
Where we fit¶
CCT Labs turns a feedback-based physics framework into a reproducible engineering methodology. We function first as a reference lab for programmable physical media: substrates whose native dynamics can be steered, measured, and benchmarked under declared constraints.
We've developed two operational quantities we use to score substrates:
- RFH (α): the exponent linking estimation error to measurement bandwidth (with bandwidth defined as information throughput, e.g., FI/sec or a monotone proxy) for a declared estimator, noise model, and bandwidth definition. Coherent regimes approach α ≈ 1; the "Theorem 8 Floor" of incoherent averaging yields α ≈ 0.5, the limit of uncoordinated measurement where resolution is capped by back-action-limited noise. CCT Labs uses this floor as the baseline benchmark: any system realizing α ≈ 0.5 is treated as a passive, "noisy" substrate. Success in Phases 1-3 requires navigating the system out of this Theorem 8 regime into a coherent-integration regime where α interpolates toward 1.0, signifying that structured driving has suppressed back-action costs. This log-log scaling is called the Bandwidth-Quantization Law (BQL). RFH treats instruments and controllers as finite-bandwidth "compilers" whose limits shape observed discreteness.
- Prog_T: intentional steering (from control inputs to task outcome) per joule, under a full energy ledger, closed-loop constraints, and a defined time horizon. Coherent regimes typically allow higher Prog_T than incoherent regimes.

Baselines for Prog_T comparisons:

- Formation Control: brute-force = direct mechanical actuation (e.g., voice-coil / stage control) or passive drift matched to the same stability target.
- VO₂ Phase Switching: thermal = resistive heating to trigger the insulator-metal transition at equilibrium.
We will publish the exact coherence functional and estimators in Lens Tools v1.
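To make the α extraction concrete, here is a minimal sketch (all names are hypothetical; the official estimators and uncertainty handling ship with Lens Tools v1): generate synthetic error-vs-bandwidth data and recover the exponent by least squares on log-log axes.

```python
import numpy as np

def fit_rfh_alpha(bandwidth, error):
    """Fit error ~ C * bandwidth**(-alpha) by least squares in log-log space."""
    slope, _ = np.polyfit(np.log(bandwidth), np.log(error), 1)
    return -slope  # error falls as B^-alpha, so alpha = -slope

# Synthetic check: coherent-like scaling (alpha = 1.0) with mild log-normal noise
rng = np.random.default_rng(0)
B = np.logspace(1, 4, 30)                          # measurement bandwidth (arb. units)
err = 2.0 * B**-1.0 * rng.lognormal(0.0, 0.05, B.size)
alpha = fit_rfh_alpha(B, err)
print(f"recovered alpha ~ {alpha:.2f}")            # close to 1.0; ~0.5 would be the Theorem 8 floor
```

The same fit applied to data generated at the incoherent floor would return a slope near 0.5, which is why the regression doubles as the "am I still coherent?" gauge discussed later.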
The Engineering Bet¶
We're building coherent-field-control methodology for high-programmability regimes. We run a strict Validation Loop: pre-registered predictions → simulation → hardware replication (declared tolerances + stop conditions; publish negatives).
Simulation is treated as a control-and-measurement dress rehearsal: candidate controllers must remain effective under declared delay/bandwidth limits and finite-shot noise, and must carry across conditions before they become hardware targets.
Practically, this means we design and pre-register controller families (waveform shape + timing), estimator regimes (shot budgets and holdout/generalization checks), and calibration logic (when to do 1-point vs 2-point) in simulation before we build hardware around them.
These default controller and estimator choices are motivated by the internal toy-world controllability addendum in Appendix C §11.12.
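For readers unfamiliar with the 1-point vs 2-point distinction mentioned above, a minimal illustration (the linear instrument model and all numbers are invented; this is not the lab's pre-registered calibration logic): a 1-point calibration removes only an offset, while a 2-point calibration fixes both gain and offset from two known references.

```python
def one_point_cal(reading, ref_true, ref_reading):
    """Offset-only correction: assumes the instrument gain is already correct."""
    return reading - (ref_reading - ref_true)

def two_point_cal(reading, lo_true, lo_reading, hi_true, hi_reading):
    """Gain + offset correction from two known reference points."""
    gain = (hi_true - lo_true) / (hi_reading - lo_reading)
    return lo_true + gain * (reading - lo_reading)

# Hypothetical instrument with gain error (x1.05) and offset (+0.2)
raw = lambda true: 1.05 * true + 0.2
print(one_point_cal(raw(10.0), 0.0, raw(0.0)))                    # 10.5: gain error remains
print(two_point_cal(raw(10.0), 0.0, raw(0.0), 20.0, raw(20.0)))   # 10.0: both errors removed
```

The pre-registration question is simply when the cheaper 1-point correction suffices and when the 2-point version is required, decided in simulation before hardware is built.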
Phase 1-2 results are valuable on their own; later phases are explicitly gated by replication.
One stack, two domains: We're building one measurement-and-control stack that validates both AI and Space applications. Year 1 is de-risking; if it validates, Space is the long game.
The Insight Chain¶
CCT is built on a small number of testable claims. Each builds on the last:
| # | Claim | What It Means | Test |
|---|---|---|---|
| 1 | Observed discreteness can emerge from finite bandwidth | Measurement limits can induce apparent quantization | RFH shows regime-local scaling/bands + observer-slider transition (counting → phase-sensitive) under pre-registered confounder controls. |
| 2 | Coherence/estimation scaling transfers across domains | The same principles apply from photonics to biology when the constraint class is declared (actuation limits and measurement limits) | RFH + estimator behavior across ≥3 domains under matched constraint classes |
| 3 | Coherence can be increased via structured driving | Systems can be driven from incoherent to coherent regimes | Regime switching replicated in hardware |
| 4 | Coherent driving improves task control per joule | Maximum effect per joule comes from coherent driving | Prog_T(coherent) > Prog_T(thermal) under full energy ledger |
Claims 1-4 are testable in Year 1. We test progressively deeper regimes; later phases are gated by earlier replication.
Exploratory (gated by Phases 1-3): Extreme coherence may perturb effective propagation metrics. Test: blinded, pre-registered ToF/phase residual search after Phases 1-3.
CCT is presented across three epistemic layers (model theorems → engineering regime → ontology); this grant funds Layer 2 (engineering) validation. Layer 3 remains downstream: it treats known physics as a set of high-stability effective regimes to be explained, while Year-1 benches validate the measurement-and-control layer.
Read: cct-philosophical.md (ontology) and cct-scientific.md (scientific).
So far¶
- We've fit RFH exponents across heterogeneous data as exploratory calibration/workflow checks to map regimes and estimator behavior: LIGO, cameras, radar, ECG, and pulsars (exploratory fits; estimator details to be published in Lens Tools), bioelectric regeneration, plus newer pilots in paleomagnetic excursions, economic time series, and a small-source quantum-optics sweep. For bioelectric systems, a 9-level synthetic gap-junction sweep yields α = 0.35 ± 0.02 (sub-incoherent regime). Recent pilots add concrete portability checks: an economic aggregation-distortion pipeline yields \(\alpha_{\text{RV}} \approx 0.52\) stably across BTC/ETH (with some tail-based distortions falling below-band), paleomagnetic excursions show positive scaling, and a small-source quantum-optics dataset (DS3) yields mid-band \(\alpha \sim 0.47\text{--}0.53\) under a proxy \(B\) definition with fixed RBW. These primarily validate portability and falsifier hygiene (declared \(B, \Delta\), uncertainty handling, and sensitivity checks).
- We've built simulations of analog "horizon" devices that show discrete stable response bands and a high-gain regime, reaching ≈4.9× response gain at ~88% coherence, where response gain and programmability per joule peak.
- We've validated mode-selective coherent control in lattice simulations ("Cold Melt"), demonstrating regime switching: baseline lattice dynamics (thermal, incoherent) → resonant coherent driving → mode-selective coherent response. This shows a ~3× Prog_T advantage (constant-factor gain; scaling class unchanged) and validates the core claim that coherence is programmable via field structure, not a fixed material property.
- Our bench-facing simulation program has since narrowed the active lab stack: Hybrid MZI is the clearest measurement-regime bench; RF/EM currently supports geometry/basin validation more strongly than a broad controller-superiority claim; VO₂ is the strongest current material-control branch; YBCO is gated; and Phase 4 is a later metrology branch rather than an immediate lead claim.
The next step is to turn those specific adjudication benches into validated lab hardware and a general methodology that others can use.
Impact Targets¶
Space: Coherent Field Control at Increasing Depth¶
We pursue space systems via programmable field coherence (measurement + feedback + actuation) that shifts capability from onboard propellant to external infrastructure (beams/fields, timing, sensing, control). The objective is not new fundamental physics; any apparent "gain" is treated as power routing / field focusing relative to baseline configurations, not energy creation.
The Core Bet: Coherent fields give you more control per joule than incoherent energy. We validate this at increasing depth:
The near-term CCT Labs bench stack is narrower than the long-horizon story:
- photonic measurement-regime benching (Hybrid MZI);
- photonic substrate / band-structure benching (RFH-QF reference bench);
- RF/EM field-control benching;
- and VO₂ as the first material-control hardware branch.
| Phase | Level | Experiment | Success Metric |
|---|---|---|---|
| 1 | Field | RF/EM Field-Control Bench | RFH α in [0.9, 1.1], stable field geometry under closed-loop phase control (report knees/bands if present). |
| 2 | Matter | VO₂ Insulator-Metal Transition | Prog_T(coherent) > Prog_T(thermal) |
| 3 | Quantum | YBCO Superconductor Tc Tuning | Prog_T ratio > 1.5× (stretch goal) |
| 4 | Metric | ToF/Phase Anomaly Detection | Reproducible anomaly > 1σ (Year 2+) |
We've already de-risked candidate operating points in simulation. One photonic target for the RFH-QF reference bench is the old ≈0.32 / ≈4.9× / ≈88% Golden Config, with a high-fidelity backup near ≈0.28. That operating point still matters, but it is only one branch of the Year-1 program, not the sole definition of success.
Phase 1: Formation Control. We use electromagnetic (RF/EM) standing-wave fields and closed-loop phase control to create field-shaped potential wells that stabilize test masses on a low-friction stage (air-bearing or pendulum). We quantify performance with RFH α (target: [0.9, 1.1]) and a Prog_T energy ledger. EM field shaping is the natural bridge from tabletop validation to macroscopic actuation, because the same synchronization + feedback primitives scale to phased-array and distributed-field architectures. This bench is not only a stabilization demo; it is a pre-declared geometry/basin validation test first and a controller-selection test second. CCT does not count success here as a broad waveform/controller win unless that survives matched-resource holdouts on hardware after the basin story itself is validated.
Phase 2: Material Control. We extend the same methodology to drive a phase transition (VO₂ insulator-metal) with less energy than thermal equilibrium. This validates CCT at the material level: coherent fields are more efficient than heat.
Phase 3 (Stretch): Quantum Materials. If Phase 2 succeeds, we attempt the same on a superconductor (YBCO), probing whether coherent control extends to quantum phase transitions.
Phase 4 (Year 2+): Metric Exploration. If Phases 1-3 validate CCT methodology, we probe whether extreme coherent control produces detectable ToF (time-of-flight) or phase anomalies: deviations from baseline propagation predictions. Current reduced-order de-risking now supports both delay and phase residuals under a declared systematic ledger, with hardware still required to validate either channel under strict ON/OFF null controls.
Each phase builds on the last. We don't claim Phase N+1 until Phase N is validated.
Why This Works: Field Geometry as Structure
We've identified specific field configurations that provide structural control at lower energy than mechanical alternatives. The physics is standard (coherent interference, standing waves); the engineering insight is unique in which configurations work and how to stabilize them. Field geometry replaces structural mass.
This isn't exotic physics. Optical tweezers trap particles the same way; we're scaling it to macroscopic formation control and asking whether the same principle extends to effective propagation (Phase 4).
RFH and Prog_T as Engineering Tools
These metrics aren't just validation criteria; they're engineering tools for scale-up:
| Metric | What It Tells You |
|---|---|
| RFH α | "Am I still in the coherent regime?" If α drops toward 0.5, you're losing coherence |
| Prog_T | "How much control do I get per joule?" Your energy budget for a given mission outcome |
As you scale from lab bench to formation control to larger distances, RFH and Prog_T track whether the physics still holds. They're the gauges that tell you "this will work at scale" or "this caps out here."
For the photonic observer-slider bench, success is treated differently: it is a purpose-built measurement-regime test, and it succeeds only if the record type shifts reproducibly as the observer mode is swept under fixed source conditions and declared confounder controls.
AI: Calibration Domain¶
Coherent control applies to analog computing (thermodynamic co-processors), where physical relaxation dynamics buy compute. The same RFH/Prog_T metrics that score space substrates also score AI substrates, making AI a calibration domain for our methodology.
AI is a calibration domain for CCT Labs, not a bid to become another alternative-compute company.
Year-1 scope (AI): Validate whether a candidate analog substrate supports (a) reproducible RFH scaling, and (b) measurable Prog_T under full energy accounting. If validated, the path is licensing reference devices and methodology to hardware partners, not vertical integration.
Biology: Partner-Led Calibration¶
We provide RFH/Prog_T analysis tools to partner labs studying bioelectric regeneration (planaria, Xenopus, organoids). This is not a core deliverable: math and tools only.
A Note on "Coherence"¶
Throughout, "coherence" means repeatable, phase-consistent response: the same inputs produce statistically consistent outputs across repeated trials. The exact coherence functional will be finalized and published as part of the CCT scientific methodology.
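One simple proxy in this spirit (purely illustrative; the published functional may differ) treats each trial's response as a phase and scores coherence as the mean resultant length across repeated trials: 1 for perfectly repeatable responses, near 0 for phase-randomized ones.

```python
import numpy as np

def phase_coherence(phases):
    """Mean resultant length of trial phases (radians): 1 = identical, ~0 = random."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(1)
repeatable = 0.3 + rng.normal(0.0, 0.05, 500)      # tightly clustered trial phases
scrambled = rng.uniform(-np.pi, np.pi, 500)        # phase-randomized trials
print(phase_coherence(repeatable), phase_coherence(scrambled))
```

The first score sits near 1 and the second near 0, which is the qualitative behavior any published coherence functional would need to reproduce.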
Known Gaps This Grant Resolves (De-risking Deliverables)¶
This is a de-risking program. Four items must be resolved before scaling claims:
- Coherence functional: replace simulation proxy language with a published operational definition and a hardware-measurable equivalent.
- Prog_T realism: report Prog_T with a subsystem energy ledger (actuation / measurement / I/O) and task-relevant latency/stability constraints.
- AI wedge: choose a first benchmark class and define the readout/scoring protocol and baselines in advance (pre-registered), enabling apples-to-apples comparisons.
- RFH standardization: a default bandwidth definition (\(B :=\) information rate) and pre-registered discreteness metrics that are stable across measurement modes.
12-month Track (seeking a $250k grant)¶
This program buys one thing: bench replication of pre-registered predictions. We already have candidate geometries and simulation campaigns; funding converts them into measured hardware with published protocols, tolerance bands, and an energy ledger, turning CCT from "promising" into "reproducible."
The grantee will have an option to invest in the first priced equity round at an agreed percentage, if the results justify scaling into more ambitious hardware and applications.
Turn the CCT framework (including RFH + Prog_T) from a promising theory + simulation stack into a validated methodology plus a reference device.
- A simulation-to-hardware pipeline with at least one pre-registered prediction test completed (taking a pre-registered horizon simulation prediction into real optics).
- Design and partial realization of a photonic reference bench, reproducing predicted band structure and quantized-filter behavior in real optics (within declared tolerances), in collaboration with photonics/quantum optics facilities.
- RF/EM Field-Control Bench (Space Track): A tabletop demonstrator using RF/EM phase-synchronized emitters for active coherent field stabilization. Success criteria: (a) stable field-shaped potential geometry under closed-loop phase control (α in [0.9, 1.1]), (b) measurable Prog_T under feedback. This validates the core "Formation Control" capability before scaling.
- First programmability-per-joule (Prog_T) measurements in hardware, compared against simulation using pre-declared topological/coherence observables (e.g., band counts and coherence metrics) and uncertainty bands.
- VO₂ Coherent Phase Switching (Phase 2): Demonstrate that coherent optical driving triggers the insulator-metal transition with higher Prog_T than thermal equilibrium. Success criteria: Prog_T(coherent) > Prog_T(thermal) by >1.5×. This validates CCT at the material level.
- YBCO Coupling Experiment Design (Phase 3, stretch): If VO₂ succeeds, design the cryogenic experiment for YBCO Tc tuning (months 9-12, contingent on Phase 2 validation).
- A public software toolkit ("Lens Tools") for RFH/Prog_T analysis and benchmarking on external systems (and datasets), plus a reference implementation and calibration protocols provided via our lab hardware kits / partner builds (so results stay reproducible, not "works on my machine").
- Initial analysis on existing bioelectric regeneration datasets in collaboration with biology labs, establishing biology as a third, multi-scale testbed, with no in-house wet-lab build-out.
Collaborations & Enablers¶
- High-intensity theory and simulation work to map promising regimes and prioritize what to validate in hardware next.
- Access to photonics/quantum optics facilities for bench builds and calibration (equipment sharing, not deliverable partnerships).
- Collaborations with developmental biology labs for data access and applying RFH/Prog_T to regeneration datasets; we provide math + tools only.
Public outputs¶
- A validated methodology and initial reference hardware (TRL ~3-4).
- Open tools and a public evidence base across at least three domains (physics, engineering, biology).
Budget use ($250k)¶
- Photonics bench parts & metrology time: optics, mounts, detectors, measurement access
- Fabrication & fixtures: custom components, air-bearing stage or pendulum setup
- Measurement hardware: phase-locking electronics, feedback controller, DAQ
- Compute: simulation runs, data storage, analysis infrastructure
- Personnel time: researcher salary, experiment execution, analysis
- Pre-registration, publication & open-source: protocol documentation, Lens Tools release
This 12-month program establishes the reference device and evidence base needed to scale into more ambitious hardware and applications.
Risks & Decision Gates¶
These are explicit criteria for when we should iterate, narrow the claim, or reallocate effort, so we don't over-interpret early results.
- RFH doesn't replicate (in a declared regime): If regime-local exponents/bands fail to reproduce in pre-registered tests on new datasets/hardware, we treat that as evidence a specific regime claim is wrong. We tighten the estimator/regime definition and rerun; if it still fails, we drop that claim and publish the negative result.
- Predicted configuration doesn't translate to hardware: If the bench can't reproduce the predicted band structure within declared tolerances, we update the model/bench and iterate. If it remains unreproducible, we classify it as a simulation artifact and move the pipeline to the next candidate geometry/substrate.
- No measurable uplift in early targets: If programmability-per-joule measurements and task benchmarks don't beat strong baselines, we don't scale the application narrative yet. We narrow to the problem classes where a substrate shows a measurable advantage and keep the rest as longer-horizon tracks.
- Space doesn't scale under feedback: If measured coherence/band structure and phase/ToF observables fail to improve under increased control bandwidth (or degrade under closed-loop operation), we narrow the space narrative to the regimes where infrastructure demonstrably improves energy routing and control, and keep longer-horizon tracks behind hardware gates.
Roadmap (After the 12-month program)¶
Months 12-24: Replication and scaling¶
- Harden the photonic reference bench into a repeatable reference device (repeatability, calibration, documented tolerances).
- Expand programmability-per-joule measurements beyond the first device (multiple substrates/architectures; head-to-head comparisons).
- Grow Lens Tools into a reproducible pipeline with benchmark datasets and protocols others can run, plus a certified "reference device + measurement stack" path for teams who want apples-to-apples hardware comparisons.
2-5 years: Application prototypes¶
- AI: Targeted analog co-processor prototypes with measurable energy gains on selected tasks.
- Space: Phase 1 (Field Control) → Phase 2 (VO₂ Material Control) → Phase 3 (YBCO Quantum Control), progressing toward infrastructure-first mission prototypes with partners. Each phase builds on validated results from the prior phase.
- Bio: Partner-lab pre-registered studies in morphogenesis/regeneration using the same scoring lens.
Output: A pipeline that moves from theory → simulation → reference devices → application prototypes.
Why CCT Labs (Counterfactual Value)¶
What's missing without CCT?

- No unifying framework: coherence phenomena are studied in silos (photonics, superconductors, biology) without shared metrics or theory.
- No cross-domain transfer: insights from LIGO don't inform bioelectric research; analog computing doesn't connect to formation control.
- No clear "north star": no way to know whether a discovery in one domain applies elsewhere.

What's missing without CCT Labs?

- Methodology stays academic: RFH and Prog_T remain journal metrics, not engineering tools.
- No hardware validation: simulations don't prove anything until bench-replicated.
- No reference devices: each group builds from scratch instead of using validated specifications.
- Space application stays speculative: coherent field control for formation/propulsion remains a paper concept.

What CCT Labs provides:

- A single measurement stack (RFH, Prog_T) that works across domains.
- A validation loop that turns theory into hardware.
- Reference devices and methodologies others can license or replicate.
- A bridge from "interesting physics" to "space capability".
Why Now, and Why a Lab?¶
Why now
- We already have candidate geometries, metrics, and pre-registered simulation predictions; the bottleneck is bench replication.
- Photonics tooling and measurement infrastructure are now accessible enough to run tight replication loops quickly.
- Energy/control constraints are becoming first-order in AI and space, making bits-per-joule advantages strategically valuable.
- For infrastructure-first space systems, a growing part of the bottleneck is metrology and control: precision photonics, timing, and feedback systems are finally strong enough to prototype these approaches on Earth and scale outward.
The enabling stack for infrastructure-first space systems is already real (DSOC deep-space optical links, power-beaming studies, SBSP assessments), so the remaining question is where coherence + feedback yield outsized leverage per joule.
Why a dedicated lab
- The validation loop breaks if theory, simulation, and bench work are split across organizations: iteration slows and assumptions drift.
- A focused lab can run the loop end-to-end (model → pre-register → build → measure → update) and produce reference devices + tools others can adopt.
- The intent is a Bell-Labs-style pipeline: steady principles, reference devices, and spin-outs grounded in one control framework.
Our aspiration is space. If programmable field coherence validates, space is the long-horizon application domain with the highest leverage for this lab. AI and Bio provide calibration, validation, and near-term partnerships; space remains the downstream focus once hardware validates.