Fusion is moving from “can it work?” to “can it ship?” That shift changes what success looks like. In a lab, it’s acceptable for knowledge to live in people’s heads, in scattered docs, or in a few heroic spreadsheets. In commercialization, it’s not. The closer fusion projects get to first-of-a-kind (FOAK) plants and repeatable builds, the more commissioning becomes the make-or-break phase—and commissioning lives or dies on how well you manage tests, issues, and turnover.
A digital commissioning system isn’t just a nicer way to store paperwork. For fusion, it’s the backbone that turns complex, highly coupled subsystems into a plant that can be started safely, tested repeatably, and handed over with confidence.
This post lays out what “one source of truth” actually means in a fusion context, why it matters, and what it should include.
Why fusion commissioning breaks the spreadsheet model
Fusion commissioning is a perfect storm:
- Deep interdependency: Vacuum, cryogenics, pulsed power, controls, shielding, cooling, diagnostics—everything touches everything.
- FOAK churn: Design changes happen late. Procedures evolve. Test sequences get rewritten mid-stream.
- Mixed stakeholders: OEMs, integrators, EPCs, owner-operators, regulators, and investors all need different slices of evidence.
- High consequence of confusion: A test run on the wrong configuration isn’t just wasted time—it can be unsafe, damage equipment, or invalidate months of data.
Spreadsheets can track a list. They can’t enforce readiness prerequisites, maintain configuration baselines, connect issues to retests, or prove that turnover packages represent what was actually built and tested.
Fusion needs a system that makes the commissioning truth discoverable and defensible.
What “one source of truth” means in commissioning
“One source of truth” is not “a shared drive.” It’s not “a dashboard.” It’s a living record that answers, reliably and quickly:
- What are we commissioning? (assets, subsystems, tags, boundaries)
- What tests prove it works? (procedures, acceptance criteria, results)
- What went wrong and what was done about it? (issues, root cause, corrective actions)
- What is ready to hand over, and what evidence supports that claim? (turnover packages)
In other words, it’s a system that ties work to evidence to acceptance.
If someone asks, “Is the cryoplant ready for integrated testing with the magnets?” you should be able to answer in minutes—with proof—not after a week of hunting through email threads.
The commissioning triad: tests, issues, and turnover (and why they must be linked)
Fusion commissioning has three core objects. If these don’t live in the same system—and link to each other—you’ll never have a coherent picture of progress or readiness.
1) Tests: the work and the proof
Tests aren’t just checkboxes. For fusion, a test has:
- Scope (asset/system boundary)
- Preconditions (configuration, calibration, training, permits, safety states)
- Procedure (steps, data capture requirements)
- Acceptance criteria (what “pass” means)
- Result (data, sign-offs, deviations, attachments)
A commissioning platform should treat tests as structured items with status and auditability—not as PDFs buried in folders.
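As a sketch of what "structured item with status and auditability" can mean in practice, here is a minimal test record. The field names and statuses are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class TestStatus(Enum):
    DRAFT = "draft"
    READY = "ready"
    PASSED = "passed"
    FAILED = "failed"
    PASSED_WITH_EXCEPTION = "passed_with_exception"

@dataclass
class CommissioningTest:
    test_id: str
    asset_tag: str                    # scope: the asset/system boundary under test
    preconditions: list[str]          # e.g. calibration, permits, safety states
    procedure_rev: str                # revision of the procedure actually used
    acceptance_criteria: list[str]    # what "pass" means, stated up front
    status: TestStatus = TestStatus.DRAFT
    results: list[dict] = field(default_factory=list)  # data, sign-offs, attachments
```

Because each field is explicit, a report can filter on status or procedure revision instead of parsing PDFs.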
2) Issues: the friction and the learning
Issues are inevitable in FOAK. What matters is whether they are:
- Logged immediately
- Tagged to the right asset/system
- Connected to the test(s) that surfaced them
- Tracked through corrective action and retest
- Closed with evidence
In a mature system, issues aren’t a “separate problem tracker.” They’re the mechanism that ties commissioning reality to engineering response.
3) Turnover: the moment commercialization begins
Turnover is where fusion stops being a project and becomes an operating plant. Turnover packages should be assembled automatically from the source-of-truth system—because they should represent:
- What was installed (as-built and tag completeness)
- What was tested (coverage and results)
- What was fixed (issue history and closure evidence)
- What the operator needs (O&M docs, training records, spares, procedures)
When turnover is manual, it becomes negotiation. When it’s data-driven, it becomes confidence.
What a digital commissioning system must support for fusion
Here’s the minimum “fusion-grade” set of capabilities that make one source of truth real.
A. Asset model that matches the plant (not the org chart)
Fusion plants are not organized like teams. They’re organized like systems.
You need a tag-based asset hierarchy that supports:
- Systems → subsystems → equipment → instruments → I/O
- Boundaries (what’s included in a test and what’s not)
- Ownership (OEM vs integrator vs owner)
- Versioning (what changed and when)
If your asset model is wrong, every downstream report is wrong.
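One way to picture the tag-based hierarchy above is as a tree where every node knows its level, owner, and revision, and a test boundary is simply a subtree. This is a sketch under assumed field names, not a prescribed model:

```python
from dataclasses import dataclass, field

@dataclass
class AssetNode:
    tag: str        # e.g. "CRYO-COMP-01"
    level: str      # system / subsystem / equipment / instrument / io
    owner: str      # OEM, integrator, or owner
    revision: int = 1
    children: list["AssetNode"] = field(default_factory=list)

    def find(self, tag: str) -> "AssetNode | None":
        """Depth-first search for a tag anywhere under this node."""
        if self.tag == tag:
            return self
        for child in self.children:
            hit = child.find(tag)
            if hit:
                return hit
        return None

    def tags_in_boundary(self) -> list[str]:
        """All tags inside this node's test boundary (the subtree)."""
        out = [self.tag]
        for child in self.children:
            out.extend(child.tags_in_boundary())
        return out
```

The payoff of the subtree-as-boundary convention is that "what is included in this test?" becomes a query, not a judgment call.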
B. Test readiness as a gate, not a hope
Fusion commissioning is filled with “almost ready” moments that waste days.
A digital system should enforce readiness prerequisites like:
- Calibration status is current
- The correct firmware/software version is installed
- Interlocks and permissives are verified
- Safety permits are approved
- Training is complete for the test crew
- Required documents and drawings are attached and approved
A good readiness workflow prevents the “we showed up and couldn’t run” failure mode.
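A readiness gate can be as simple as a function that returns the list of unmet prerequisites, blocking the run until it returns empty. The check names and record shapes below are assumptions for illustration:

```python
def readiness_gaps(test: dict, plant_state: dict) -> list[str]:
    """Return unmet prerequisites for a test; an empty list means ready to run.
    Prerequisite names are illustrative, not a standard checklist."""
    checks = {
        "calibration_current": plant_state.get("calibration_current", False),
        "firmware_approved": plant_state.get("firmware_version") == test["required_firmware"],
        "interlocks_verified": plant_state.get("interlocks_verified", False),
        "permits_approved": plant_state.get("permits_approved", False),
        "crew_trained": plant_state.get("crew_trained", False),
        "docs_attached": plant_state.get("docs_attached", False),
    }
    return [name for name, ok in checks.items() if not ok]
```

The point of returning the gaps rather than a yes/no is that the crew sees exactly which prerequisite is blocking, days before showing up.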
C. Structured evidence capture
Commissioning data must be usable later—for replication, licensing, and operations. That means:
- Standard data fields (not free-text chaos)
- Attachments tied to specific test steps
- Time stamps and sign-offs
- Automatic capture of configuration context (software versions, revisions, setpoints)
- Clear deviation handling (pass with exception, conditional pass, retest required)
Evidence without context becomes trivia.
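To make "automatic capture of configuration context" concrete, here is a sketch of a step-level evidence record that stamps each result with its context at capture time. The field names are hypothetical:

```python
from datetime import datetime, timezone

def capture_step_evidence(step_id: str, value, signer: str, context: dict) -> dict:
    """Record one test-step result with its configuration context attached.
    Field names are illustrative, not a standard format."""
    return {
        "step_id": step_id,
        "value": value,
        "signed_by": signer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "context": {
            "software_version": context["software_version"],
            "procedure_rev": context["procedure_rev"],
            "setpoints": context["setpoints"],
        },
    }
```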
D. Configuration and change control that respects FOAK reality
Fusion will change during commissioning. The goal isn’t to pretend it won’t—it’s to ensure test results remain valid.
Your system should make it easy to answer:
- What revision of the procedure was used?
- What configuration was installed at the time of the test?
- Did the acceptance criteria change?
- Which completed tests are invalidated by this change?
- Which systems must be retested?
This is where “one source of truth” becomes schedule protection.
E. Issue-to-retest loops
If an issue closes without a defined retest, you’ve created a paper success.
Digital commissioning should support:
- Issue generated directly from test step failure
- Automatic linking to the affected asset + test
- Required corrective action evidence
- Automatic creation of a retest task/checklist
- Closure only after retest passes
This reduces “hidden debt” that surfaces later during integrated ops.
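The closure rule above can be enforced in code rather than in policy documents. This is a minimal sketch (class and field names are hypothetical) of an issue whose closure is gated on retest evidence:

```python
class Issue:
    """Issue raised from a test-step failure; cannot close without a passing retest."""

    def __init__(self, issue_id: str, asset_tag: str, source_test_id: str):
        self.issue_id = issue_id
        self.asset_tag = asset_tag            # automatic link to the affected asset
        self.source_test_id = source_test_id  # automatic link to the failing test
        self.corrective_action = None
        self.retest_id = None
        self.retest_passed = False
        self.status = "open"

    def add_corrective_action(self, description: str, retest_id: str) -> None:
        # Logging a fix automatically schedules a retest task.
        self.corrective_action = description
        self.retest_id = retest_id
        self.status = "in_retest"

    def record_retest(self, passed: bool) -> None:
        self.retest_passed = passed

    def close(self) -> None:
        # Closure is gated on retest evidence, never on the fix alone.
        if not (self.corrective_action and self.retest_passed):
            raise ValueError("cannot close issue without a passing retest")
        self.status = "closed"
```

Making the gate a hard error, not a convention, is what turns "closed" into a claim you can defend.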
F. Turnover packages generated, not assembled
Operators, regulators, and investors will all ask for proof that the plant is ready. Your system should generate turnover packages per system boundary that include:
- Asset completion (tags installed, verified, as-built linked)
- Test coverage (what’s done, what’s outstanding)
- Exceptions and waivers (with approvals)
- Issue history and closure evidence
- O&M documents and training sign-offs
- Cyber/security and controls documentation where relevant
If your team is manually building binders or PDF piles at the end, you’re paying interest on disorganization.
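"Generated, not assembled" means the package is a query over live records. Here is a sketch under assumed record shapes (not a standard turnover format) of how one boundary's summary might be built:

```python
def build_turnover_package(boundary_tag: str, assets: list[dict],
                           tests: list[dict], issues: list[dict]) -> dict:
    """Assemble a turnover summary for one system boundary from live records.
    Record shapes and readiness rule are illustrative assumptions."""
    in_scope = [a for a in assets if a["boundary"] == boundary_tag]
    scope_tags = {a["tag"] for a in in_scope}
    boundary_tests = [t for t in tests if t["asset_tag"] in scope_tags]
    open_issues = [i for i in issues
                   if i["asset_tag"] in scope_tags and i["status"] != "closed"]
    return {
        "boundary": boundary_tag,
        "assets_installed": sum(a["installed"] for a in in_scope),
        "assets_total": len(in_scope),
        "tests_passed": sum(t["status"] == "passed" for t in boundary_tests),
        "tests_total": len(boundary_tests),
        "open_issues": [i["issue_id"] for i in open_issues],
        "ready_for_turnover": (
            not open_issues
            and all(t["status"] == "passed" for t in boundary_tests)
            and all(a["installed"] for a in in_scope)
        ),
    }
```

Because the summary is derived, not transcribed, it reflects the actual as-built and as-tested state at the moment it is generated.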
The commercialization payoff: why this matters beyond engineering
Digital commissioning isn’t a “nice-to-have.” It pays off in three ways.
Reduces FOAK schedule slip
Most slips aren’t caused by a single technical failure—they’re caused by:
- waiting on prerequisites,
- missing documents,
- unclear ownership,
- retesting due to bad configuration control.
A single system makes blockers visible early and reduces rework.
Makes acceptance bankable
If you want project financing, partners, or grid contracts, you need credible claims about:
- completion,
- safety,
- performance,
- availability trajectory.
Those claims must be backed by traceable evidence.
Builds the repeatable plant
Fusion commercialization is ultimately a replication challenge. The first plant’s commissioning record becomes:
- your next plant’s checklist library,
- your training dataset for new crews,
- your baseline for design improvements,
- your evidence approach for licensing.
A messy FOAK record is a tax on every future build.
How to implement without boiling the ocean
Fusion teams often hear “digital transformation” and picture a two-year IT program. Don’t do that. Start with three steps:
1) Standardize the commissioning objects
Define your core data model:
- assets/tags,
- tests/checklists,
- issues,
- turnover packages,
- roles/approvals.
If the objects are consistent, the workflow can mature later.
2) Build a test library early
Even if procedures evolve, you want:
- consistent structure,
- consistent acceptance criteria,
- consistent data capture.
A test library becomes a replication asset.
3) Make turnover boundaries explicit
Pick system boundaries (e.g., “vacuum subsystem,” “cryogenics,” “pulsed power”) and define what “done” means for each boundary:
- required tests,
- required documents,
- required training,
- required exception handling.
Then let the system track completeness continuously.
What “good” looks like on a fusion project
If you’ve built a real source of truth, you can do things like:
- Pull a report showing integrated test readiness for a subsystem with prerequisite gaps highlighted.
- Click from a failed test step to the issue, the corrective action, and the retest evidence.
- Generate a turnover package that reflects actual as-built state, not a promised state.
- Reuse last month’s commissioning sequence on a new unit with minimal rework.
- Answer “why did we slip?” with data, not anecdotes.
That’s the difference between a project and a product.
Closing thought
Fusion commercialization isn’t only a physics problem. It’s an execution problem. And execution depends on whether teams can coordinate reality—across disciplines, vendors, and constant change—without losing the thread.
A digital commissioning system that unifies tests, issues, and turnover doesn’t just make commissioning smoother. It makes fusion buildable.
If fusion is going to scale, commissioning has to scale first.