One of the most frequent questions you’ll hear—often from well-meaning CFOs or project sponsors—is this: “What’s the ROI of this platform?” It’s a fair question. It’s also a tricky one.
The uncomfortable truth is that foundational investments rarely yield immediate, measurable returns. You won’t get a line in your financial dashboard that says “+5% ROI from the platform’s context model.” What you’re actually investing in is reduced risk, faster project delivery, and the ability to reuse knowledge and insights across the organisation. These are enablers—powerful ones—but their value becomes visible only over time and across multiple use cases.
Most organisations respond to ROI pressure by creating elaborate KPI dashboards. Dozens of metrics, colour-coded status indicators, trend lines that always slope upward. Let’s be honest: we’re not believers in made-up numbers. The metrics that get tracked are rarely the ones that matter. They’re selected because they’re easy to measure, not because they’re meaningful.
Vanity Metrics
Here are the metrics that make slide decks look impressive but tell you nothing about whether you’re winning:
- Number of dashboards created: Says nothing about whether anyone uses them.
- Number of sensors connected: More data doesn’t mean better insights. You can have a thousand sensors streaming meaningless noise or fifty sensors delivering critical context. Which would you rather have?
- Amount of data stored (terabytes): Storage is cheap. Valuable data is hard. Celebrating storage volume is like celebrating how many filing cabinets you own instead of what’s in them.
- Number of projects completed: Completion without impact is just activity. Finishing ten projects that deliver no value isn’t achievement—it’s waste disguised as progress.
These vanity metrics create false confidence. Leadership sees growth. Teams celebrate hitting targets. Meanwhile, the platform might be dying underneath—technical debt accumulating, users frustrated, real value stagnant. Everyone’s looking at the scoreboard while the game is being lost.
The problem isn’t measurement itself. It’s measuring the wrong things. If you’re going to track metrics at all, track the few that actually tell you whether your platform disciplines are working.
The Few Signals That Actually Matter
Instead of elaborate KPI frameworks, watch four indicators. These are harder to game, they measure outcomes rather than activity, and critically, they tell you whether the five disciplines from Section 9.2 are working in practice.
1. Scale Velocity
Measure how long each deployment takes, from the pilot (Site 1) through the second, third, and nth site. Healthy scaling shows decreasing deployment time. Site 5 should take a fraction of the time Site 1 required. If it doesn’t, you’re not scaling—you’re replicating.
This metric is brutally honest. It tells you whether your standards (Discipline 2) are actually enabling speed or just creating paperwork. It reveals whether your continuous gap assessment (Discipline 3) is improving the process or just going through the motions. Most importantly, it shows whether learning is compounding or whether every site starts from scratch.
Good trajectory: Site 1 takes six months. Site 2 takes three months. Site 3 takes six weeks. Site 5 takes two weeks. That’s the platform effect in action.
Bad trajectory: Site 1 takes six months. Site 2 takes five months. Site 3 takes six months again. Site 5 takes seven months. You haven’t built a platform—you’ve created a project template that nobody follows.
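If you want something concrete to track, a minimal sketch along these lines is enough; no elaborate tooling is required. The deployment records below are illustrative, not pulled from a real programme, and the record structure is an assumption about what your project tracker holds:

```python
from datetime import date

# Illustrative deployment records: (site, started, went live).
# Real data would come from your project tracker; these dates are made up.
deployments = [
    ("Site 1", date(2023, 1, 9), date(2023, 7, 10)),
    ("Site 2", date(2023, 8, 1), date(2023, 11, 2)),
    ("Site 3", date(2024, 1, 8), date(2024, 2, 19)),
]

baseline = None
for site, started, live in deployments:
    days = (live - started).days
    baseline = baseline or days  # Site 1 sets the baseline
    # Scale velocity: each site's deployment time relative to the pilot.
    print(f"{site}: {days} days ({days / baseline:.0%} of Site 1)")
```

The exact tooling doesn’t matter; a spreadsheet does the same job. What matters is that the trend is visible and honest.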
2. Reuse Ratio
What percentage of new projects leverage existing context, models, or connectors? If every project rebuilds asset hierarchies, you’re not scaling—you’re duplicating effort.
This metric directly measures whether Discipline 2 (Standards That Scale) is delivering. Are teams actually using your templated asset models, or are they building their own because the templates don’t fit? Are your standard connectors covering 80% of use cases, or does everyone need custom development?
Reuse ratio also reveals discoverability problems. Sometimes teams rebuild from scratch not because templates don’t exist, but because they don’t know they exist. If reuse is low despite good templates, you have a documentation or communication problem.
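One simple way to put a number on it is to tag each new project with the existing platform assets it reuses and count the projects that reuse at least one. The tagging scheme below is hypothetical; use whatever naming your platform already has:

```python
# Hypothetical project records: each lists the existing platform assets it reused.
projects = {
    "line-3-oee":          ["asset-template:packaging", "connector:opc-ua"],
    "boiler-efficiency":   [],                          # rebuilt everything from scratch
    "vibration-screening": ["connector:historian"],
    "energy-baseline":     ["asset-template:utilities", "model:energy"],
}

reusing = sum(1 for assets in projects.values() if assets)
reuse_ratio = reusing / len(projects)
print(f"Reuse ratio: {reuse_ratio:.0%}")  # 3 of 4 projects -> 75%
```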
3. Sustainability Indicator
Are early use cases still running, or did they become abandonware? This is the most uncomfortable metric to track because it forces you to confront failure.
Go back to your first five projects. Are they still in use? Still maintained? Still delivering value? If not, why? Understanding failure modes prevents repeating them.
Common failure patterns:
- Built for demo, not operation: Looked great in the pilot but couldn’t handle production edge cases
- Key person dependency: Only the original developer understood how it worked; when they left, it died
- Technical debt compounded: Quick hacks became unmaintainable spaghetti
- User needs evolved: What solved the problem in 2022 doesn’t solve it in 2025, and nobody maintained it
Sustainability directly measures whether you’re managing technical debt (from Section 9.1) or ignoring it. If half your early projects are dead within two years, something’s wrong with your approach to building lasting solutions.
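A sketch of how you might check this, assuming you keep (or can reconstruct) a simple register of your early projects with a status and a last-used date. The field names, dates, and the one-year cut-off are illustrative assumptions, not a standard:

```python
from datetime import date, timedelta

today = date(2025, 6, 1)  # reference date for the check; illustrative

# Illustrative register of the first five projects.
early_projects = [
    {"name": "pilot-dashboard",    "status": "in_use",    "last_used": date(2025, 5, 2)},
    {"name": "alarm-context",      "status": "in_use",    "last_used": date(2025, 5, 30)},
    {"name": "demo-energy-report", "status": "abandoned", "last_used": date(2023, 1, 15)},
    {"name": "quality-tracker",    "status": "in_use",    "last_used": date(2024, 11, 8)},
    {"name": "shift-handover",     "status": "abandoned", "last_used": date(2022, 9, 1)},
]

# Count projects still marked in use AND actually touched in the last year.
alive = [p for p in early_projects
         if p["status"] == "in_use" and today - p["last_used"] < timedelta(days=365)]
print(f"Still in use: {len(alive)}/{len(early_projects)} "
      f"({len(alive) / len(early_projects):.0%})")
```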
4. Self-Service Adoption
What percentage of dashboards and reports are created without the central team’s involvement? Early on, this will be low—maybe 10-20%. As you mature, it should grow towards 50-70%. If it stays low after two years, you haven’t enabled self-service; you’ve just created a faster request queue.
This metric directly measures whether you’re moving from Pattern 5 (central bottleneck) to Pattern 7 (distributed capability).
Good trajectory: Year 1, platform team creates 90% of dashboards. Year 2, they create 60%. Year 3, they create 30%. The team’s capacity hasn’t grown, but output has tripled because users are creating their own solutions.
Bad trajectory: Year 1, platform team creates 90% of dashboards. Year 2, they create 85%. Year 3, they create 90% again because the backlog exploded and they gave up on enablement to fight fires. You’ve scaled the bottleneck, not solved it.
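To see the trend, count who created each dashboard, year by year. The team labels below are assumptions about how you record ownership, not a prescribed convention:

```python
from collections import Counter

# Illustrative dashboard inventory: (year created, creating team).
dashboards = [
    (2023, "platform-team"), (2023, "platform-team"), (2023, "platform-team"),
    (2023, "platform-team"), (2023, "operations"),
    (2024, "platform-team"), (2024, "platform-team"), (2024, "operations"),
    (2024, "reliability"),
    (2025, "platform-team"), (2025, "operations"), (2025, "operations"),
    (2025, "reliability"), (2025, "maintenance"),
]

total = Counter(year for year, _ in dashboards)
self_service = Counter(year for year, team in dashboards if team != "platform-team")

for year in sorted(total):
    share = self_service[year] / total[year]
    print(f"{year}: {share:.0%} of dashboards created outside the platform team")
```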
So how do you talk to leadership about platform value when you can’t promise specific ROI? How do you secure investment when you’re honest about uncertainty? The answer isn’t elaborate financial models—it’s better framing.
The Utility Analogy
Rather than treating the platform as a one-time project with a single ROI calculation, think of it as a digital utility. Like electricity or water, it powers everything else. And just like those utilities, the costs are ongoing—but so are the benefits.
Trying to calculate ROI per use case is like calculating ROI per email sent. Imagine justifying your company’s email server by tallying up the value of every message: “This email about the quarterly review generated $47 in efficiency savings. This email about the team lunch generated $0. Overall ROI: 12.4%.” It’s absurd. And yet, that’s how many organisations approach platform investments.
The better question isn’t “What’s the ROI of email?” It’s “Can we operate effectively without it?” The same applies to your data platform. Once you reach a certain scale of digital operations, the platform isn’t optional. It’s infrastructure. The question becomes: “Do we build this infrastructure well, or do we cobble it together and pay the price later?”
Layered Investment Strategy
The pressure to “show something” early is real. CFOs don’t like funding multi-year initiatives with vague promises. The solution isn’t fake precision—it’s layering your investments to manage risk and demonstrate value progressively.
Phase 1: Secure Executive Commitment for the Foundation
Start by making the case for the foundational layer: storage, integration, connectivity. Position this as a precondition for almost any modern initiative—whether that’s AI, energy optimisation, predictive maintenance, or regulatory compliance.
Don’t promise specific ROI at this stage. Promise enablement. Be explicit: “This foundation won’t directly show up as cost savings in Q2. What it will do is make every future initiative 3x faster to deliver and 10x easier to scale.”
Use the analogy: You’re not building a house; you’re building the foundation that makes the house possible. Nobody gets excited about concrete and rebar, but without them, nothing else stands.
Phase 2: Build Momentum Through High-Visibility Wins
Once the foundation exists, pick use cases that solve real pain and show value quickly. These aren’t fake demos—they’re operational improvements people can feel.
Examples of high-visibility wins:
- Reducing troubleshooting time by making alarm context immediately visible. When a packaging line stops, operators see not just “Motor 4 fault” but the full context: maintenance history, recent parameter changes, similar events on other lines. Downtime that used to take 30 minutes to diagnose now takes 5 minutes. On a line producing $10,000/hour, that’s real money (see the back-of-envelope sketch after this list).
- Enabling a reliability engineer to create their own vibration analysis dashboard in an afternoon instead of waiting three weeks for the platform team. That engineer now solves problems faster—and the platform team’s backlog shrinks.
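Here is the back-of-envelope arithmetic behind that first example, using the illustrative figures from the bullet rather than measured results:

```python
# Back-of-envelope on the packaging-line example above; figures are illustrative.
line_value_per_hour = 10_000       # dollars of output per hour
minutes_saved_per_stop = 30 - 5    # diagnosis cut from 30 to 5 minutes

saving_per_stop = line_value_per_hour / 60 * minutes_saved_per_stop
print(f"~${saving_per_stop:,.0f} of output protected per stoppage")  # ~ $4,167
```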
These wins manage expectations and prove you’re not just pouring money into infrastructure. They show the platform working.
Phase 3: Let Compounding Effects Speak
As your reuse ratio grows and scale velocity improves, time-to-value shrinks. The second use case is faster than the first. The fifth is faster than the second. This trajectory tells the story better than any ROI calculation.
Show leadership the trend: “First dashboard took six weeks. Fifth took three days. That’s the platform effect. We’re not working harder—we’re working on top of what already exists.”
This is where the four metrics from Section 9.3 become your communication tool. You’re not promising imaginary returns. You’re showing real, measurable acceleration.
Running Costs Are Real (Don’t Hide Them)
Transparency builds trust. Don’t pretend the platform is a one-time investment. Acknowledge ongoing costs upfront:
- People: Whether internal or through partners, you need capacity to manage and support the platform. This isn’t optional overhead—it’s operational necessity. Be explicit about what this team does and why they’re needed.
- Licence fees: Understand your vendor’s pricing model. Is it based on users, data volume, storage, compute? These costs can scale quickly. Know the numbers before you commit.
- Infrastructure costs: Especially in the cloud, storage, compute, and data transfer costs can grow faster than expected. Monitor this closely. We’ve seen organisations hit with surprise bills because they didn’t architect for cost efficiency.
- Cybersecurity requirements: Industrial environments demand robust security. This costs money—audit costs, compliance costs, security tooling, regular patching and updates. Budget for it.
- Support contracts and training: Even with open-source platforms, you’ll need support contracts, training, and operational muscle. “Free” software still requires investment.
What leadership needs to understand: The platform isn’t a project that finishes. It’s capability that compounds. The first year feels expensive because you’re building foundations. The third year feels like leverage because every new initiative builds on what exists. If your management team understands this, you’re halfway there.
The Honest Conversation
The best approach is transparency about what you know and what you don’t.
Here’s what that conversation sounds like:
“Can I promise 300% ROI in 18 months? No. Can I show you that Site 5 will deploy in one-tenth the time Site 1 took? Yes.
Can I prove every pound spent maps to a pound saved? No. Can I show you we’re eliminating weeks of manual work per use case? Yes.
Can I guarantee this will solve every digital problem we face? No. Can I show you that without this foundation, every future initiative will be slower, more expensive, and less likely to scale? Yes.”
Leadership respects honesty more than fake precision. If they demand elaborate ROI models with decimal-point accuracy before approving foundational investment, you’re probably in the wrong organisation. The companies that succeed at digital transformation are the ones that understand infrastructure investment requires some faith—backed by evidence of progressive value delivery, not fantasy spreadsheets.
The metrics that matter—scale velocity, reuse ratio, sustainability, self-service adoption—give you that evidence. They show momentum. They show compounding. They show whether the disciplines are working. That’s all the precision you need.
