Willem (00:00)
Welcome, you’re listening to the IT/OT Insider Podcast. I’m your host Willem and in this special series on industrial data ops, I’m joined by my co-author David and John Younes from Litmus. If you want to hear more about what’s shaping the world of industrial data and AI, don’t forget to subscribe to this podcast and our blog. John, welcome.
John Younes (00:20)
Yeah, thank you Willem and David, thanks for having me.
Willem (00:24)
Hey David!
David (00:25)
Yeah, thanks, Willem. And John, also great having you. I want to start with a quick personal side note: Litmus and the IT/OT Insider actually go back a very long time. You've been one of the very first readers we had on the blog, so a big thank you to the Litmus team. John is the co-founder and COO at Litmus, and he has a connection with Belgium, where I live.
So hopefully we don't start talking about our amazing beers and our amazing chocolates, but stick to industrial data and data ops in this conversation. So John, why don't you start this episode with a quick introduction from your side and tell us what Litmus has to offer to our beloved industrial world.
John Younes (00:59)
Yeah, for sure. It's been great following you guys as you've been growing, of course. And it's nice to see people that understand the space and have a real focus on it, because I don't think it gets as much attention as it should, and it's a fast-growing market. So it's great to be able to read the content you guys put out and to collaborate together. Looking forward to the discussion.
Just to give a quick background: yes, as you mentioned, I'm one of the co-founders and COO of Litmus. Litmus has been around for close to 10 years. We're deployed in thousands of sites around the world as really the industrial data foundation for manufacturers: helping them leverage industrial data at scale to improve their operations, and really democratize access to data across the organization, all the way from the shop floor to the top floor, whether that's business analyst teams, supply chain, data science, or application developers that want to utilize operations data in a repeatable way on top of the same common infrastructure. At a high level, that's our mission: to democratize the access and usability of data across the organization for manufacturers.
In terms of what we offer, we provide an industrial data ops platform, a suite of products. Everything we do starts with our Litmus Edge product; it's really our core product. Essentially it's an edge data platform: you deploy it locally within the factory, and from there we have the ability to connect to any type of equipment you have in the field or in the plant itself. A lot of our core IP lies in this connectivity layer. We have the largest number of drivers and connectors to connect to pretty much any kind of equipment out there, I'd say 85-90%. Obviously you always run across some obscure or custom systems, but for the most part Litmus really does have that wide breadth of connectivity. So we can collect that data, we normalize it into a common structure and format, and then from there, we do…
David (03:37)
Yeah.
John Younes (03:55)
we do store that data locally, so you have the ability to do something with that data on-prem at the edge. You can actually process it: filter the data, set up any kind of rules or alerts around it. And then we have an analytical workflow engine, so you're not just collecting raw, normalized or processed data, you can actually have analyzed data: calculating things like averages, standard deviations, those kinds of statistical functions, on top of over 25 different manufacturing KPIs, things like OEE, asset utilization and others. Before integrating the data anywhere, we also have a data modeling and contextualization layer, so you can model the data, contextualize it, and essentially make it prepared and usable for the next-level systems where you want to leverage it, whether that's in the plant or in the enterprise. We've also built a number of connectors to send data to the various clouds, Databricks, Snowflake, any kind of enterprise system where companies want to use the data.
And then we saw that there's a big challenge in scaling these kinds of deployments and use cases. It's not about rebuilding on top of each plant you go to and vertically scaling; you need to be able to scale this across multiple sites. So we built a management layer, which sits more on the cloud or enterprise side. The customer hosts it themselves; we don't have a SaaS offering. It's called Edge Manager, and it allows customers to centrally manage all their Litmus Edges. If you want to do things like mass scaling of edge devices, creating templatized deployments, templatized data models and workflows, it's really meant for that kind of mass scalability: push your applications out to the edge, do over-the-air updates, those kinds of things. So it doesn't become a static system where you have to send somebody out every time you want to make a change, which I think is what everyone's quite used to and pretty tired of. That doesn't really work in the realm of enterprise IoT,
or whatever buzzword you want to plug in. So that's where we saw scale and manageability as something we were ahead on, even when the market wasn't ready for it. We were ready for it, so that when customers were ready, we had an answer to: how do I scale this to my next 10, 30, 40, 50 plants? And that's the edge management layer that we built.
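To make the edge analytics idea concrete, here is a minimal sketch of the kind of OEE calculation such a workflow might compute. It is illustrative only: the counter fields and the plain-Python implementation are assumptions for this example, not Litmus Edge's actual API.

```python
# Illustrative only: a shift-level OEE calculation of the kind an edge
# analytics workflow might run. Field names are assumptions for the example.
from dataclasses import dataclass

@dataclass
class ShiftCounters:
    planned_time_min: float      # planned production time
    run_time_min: float          # actual run time (planned minus downtime)
    ideal_cycle_time_s: float    # ideal seconds per part
    total_parts: int             # parts produced
    good_parts: int              # parts passing quality checks

def oee(c: ShiftCounters) -> dict:
    availability = c.run_time_min / c.planned_time_min
    performance = (c.ideal_cycle_time_s * c.total_parts) / (c.run_time_min * 60)
    quality = c.good_parts / c.total_parts if c.total_parts else 0.0
    return {
        "availability": round(availability, 3),
        "performance": round(performance, 3),
        "quality": round(quality, 3),
        "oee": round(availability * performance * quality, 3),
    }

# Example: one 8-hour shift with 45 minutes of downtime
print(oee(ShiftCounters(480, 435, 12.0, 2000, 1950)))
```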
David (06:23)
No, no.
This is certainly not the first time you've given this pitch.
John Younes (06:49)
Yeah, that might be number 20,001.
David (06:54)
And it immediately ties in perfectly with our capability map, so it's good to see how you link to those capabilities on the different levels. Do you see yourself more as an OT company looking towards IT, or a modern IT data company that happens to have OT connectors as well?
John Younes (07:23)
Yeah, the roots of the company are industrial. The founding core engineers all come from very much the automation world; they're OT engineers. But I think we've done a great job at bridging to modern technology, modern requirements and software. Where we lead, I think, is in being quite cutting edge when it comes to bringing more IT or DevOps workflows, deployment mechanisms, things like that, closer to the edge or to the OT world. So I think we've put a nice modern spin on it, but at the core we're very much manufacturing-first people. It's not that we came from outside this world and just thought, let's figure out how to disrupt this. But we did see that there was really a gap: you need to solve the problem from the source to the destination. Where people are struggling is that they have to rely on multiple vendors for this whole data journey, and it just doesn't scale well; they tend to run into challenges. So that's why we made sure we were looking at it as a complete picture, not just building brick by brick, so that we were ready for when customers were going to be at that next step, even when they may not have known it for themselves yet; at least we had an idea.
David (08:48)
Yeah,
I have a gazillion questions, but the first thing that pops into my mind: you mentioned you were already doing these things when the industry wasn't ready for data ops, or whatever fancy word we're picking. Why was the industry not ready, and what has changed in your perspective over the last years?
John Younes (09:17)
I think it’s just been more, it’s been, I think more of the change, change management elements that have kind of, I don’t want to say held things back, but have slowed the, the adoption. I think we had, our vision is still the same from day one. I can give an anecdotal story of how we actually started Litmus or where the concept came from. But the simple idea that we got when we got started with the company is connected, can,
collect data and connect to any kind of device and make that data available to any application in a way that they require it. That was the most simple concept of how we started Litmus was on that premise. And we’re still, I think we’re still on that journey and achieving it. We’ve just added more things in between of what you can do with that data. So for us, it’s still kind of that same concept that we’re trying to address. But I think before people,
did not necessarily see the, or there wasn’t the buy-in at a level, I think that could understand that how impactful data can be to the organization and that everybody needs data if we want to become more of a data-driven enterprise. And so I think just also kind of changing that data trapped in OT environments, how do you make that data more accessible in IT environments where you have more compute power and more modern tool sets now that can
do a lot more things with that data itself. So I think allowing that progression to kind of take place is what we had to do. I think there was big accelerants over the last five, six years, like COVID helped a lot with a lot of this digital fluency and investment in things like cloud and infrastructure. And so I think that helped a lot. then the last, we’ve seen a huge push in the last two years, I think of companies that now understand
David (11:05)
Yeah.
John Younes (11:14)
that I think there’s just a lot of urgency now around AI that if you’re not, if you’re not going to be leveraging AI in some kind of strategic way over the next, well, as soon as possible, but at least over the next few years that you could potentially be falling drastically behind your competitors. So these were, I think, big pushes for understanding. I think putting that mindset at more about data needs to be.
Data needs to be a corporate asset and something that we leverage at a corporate asset and not something that we’re doing in silos. So I think that’s more of the change that needed to take place. So previously, I think a lot of the use cases we worked on were more vertical or plant specific, but now it’s really opened up to enterprise wide type use cases.
Willem (11:58)
Okay, so before David asks another question: I mean, somebody needs to make sure we follow the structure here a bit. After this great introduction, and before we go back to questions, you brought a use case along. Could you explain it a bit more, to make what Litmus actually does a bit more concrete?
David (12:05)
Sorry, sorry, sorry.
John Younes (12:20)
Yeah, for sure. The use case I can go through is a large food and beverage manufacturer in Europe. They have over 50 plants and they've grown a lot through acquisition, so a lot of their factories are very different from one another. They were really looking for an agile way to deploy an industrial data ops or data foundation. They had use cases and things they had built, but they needed some kind of accelerant and foundation to deploy these quickly. They deployed Litmus in 35 plants in about 12 to 18 months, just to get the data flowing and set up coming out of all of their equipment, and they started standardizing the specific KPIs they wanted to look at.
A lot of them were focused on benchmarking and looking at OEE across sites to get started with. That was one of their main focuses: a standard way of looking at OEE, essentially benchmarking their plants. They were also looking at efficiency: it's one thing to just send all your data to the cloud, but how can you do that more efficiently, whether it's compressing the data, filtering some of it, or doing more of the processing on the edge, so that the whole transformation initiative becomes a little more manageable from a cost perspective? We were able to help drastically lower their costs on the cloud side, by 90%, and they were then able to get more value later on from the higher-level functionality and tool sets they could use on the cloud with the right data.
They also targeted specific energy and ESG use cases, which were quite important for them as well: how can we lower our energy consumption or energy footprint? In one plant, after just a few days, just because they finally had the right standardized data flowing through and were getting the KPIs, they discovered there was a piece of equipment that apparently consumes a lot of energy but was on all the time, even during non-production runs or when it was not being used. They recognized that because they finally had the data available to them. And by just turning that piece of equipment off during non-production times, they lowered their energy cost by 4% per plant, which you can imagine is quite substantial.
So those are some of the things they were able to do across the sites: really move in an agile way, move quickly. And it's a small team. They actually did this all themselves: a few people deploying a proper data foundation in over 35 plants, with about five to seven use cases deployed in the first few months, across things like ESG, OEE, doing more processing on the edge, and optimizing their data pipelines and workflows. So yeah, that's the use case I wanted to cover.
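As an illustration of the kind of edge rule that catches this, here is a minimal sketch that flags power draw outside production hours. The tag names, threshold and schedule format are assumptions made for the example, not the customer's actual configuration.

```python
# Illustrative only: a simple edge rule that flags equipment drawing
# significant power outside scheduled production windows.
from datetime import datetime, time

PRODUCTION_WINDOWS = [(time(6, 0), time(14, 0)), (time(14, 0), time(22, 0))]
IDLE_POWER_KW = 5.0   # anything above this while not producing is suspicious

def in_production(ts: datetime) -> bool:
    return any(start <= ts.time() < end for start, end in PRODUCTION_WINDOWS)

def check_sample(ts: datetime, power_kw: float) -> str | None:
    if not in_production(ts) and power_kw > IDLE_POWER_KW:
        return (f"{ts:%Y-%m-%d %H:%M} energy alert: "
                f"{power_kw:.1f} kW drawn outside production hours")
    return None

# Example: a compressor still pulling 42 kW at 2 a.m.
alert = check_sample(datetime(2024, 5, 14, 2, 0), 42.0)
if alert:
    print(alert)
```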
Willem (15:47)
I think it’s the second time or maybe more that you mentioned scalability. Because let’s say it’s the bread and butter of IT. You code it once, you deploy it and then one user, 10 million users, okay, you pay a higher bill, but in essence, it’s very scalable. When you go to the shop floor, how do you deal with that scalability challenge? Because even if you’re making the same thing in a factory, line A,
is different than line B even. So it makes it a bit harder to scale solutions with your experiences. How do you manage to get that scalability going? How do you get it started?
John Younes (16:29)
Yeah, well, first of all it starts with access to the data. Let's say you have a use case and you want to do that same use case in plant number two or three, but the equipment could be completely different. Your model might be the same, and how you want to visualize or analyze the data, but the underlying data itself and the equipment are completely different. So you need something that can understand whatever inputs are coming from the various equipment and make that data usable in a format your applications can consume in a consistent way, so that it's consolidated. We've made that connectivity element very turnkey. So yes, when I talk about scale, we can deploy and replicate a use case to probably 70, 80 percent. You still need to do that last mile of configuration. I'm not talking about custom coding or buying another product, but that next level: okay, you have it deployed, you have your application ready, you have your data workflow set up, now we just need to configure the driver, configure the connector, and make sure we're connected to the machine with the tags flowing in. And you're doing that in a matter of minutes, not days or years. So I think you can do it to a certain extent, but you still need that last mile, which is
the connection and how you're actually accessing the data. You're never going to achieve true scalability or replicability if you don't have that direct native connection to the equipment itself.
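A minimal sketch of that "last mile" idea: a reusable use-case template shared across sites, with only the per-line connection details left to configure. The field names and driver identifiers are hypothetical, not Litmus configuration syntax.

```python
# Illustrative only: replicating a use case by keeping one shared template
# and configuring only the site-specific connection block per line.
USE_CASE_TEMPLATE = {
    "kpis": ["oee", "asset_utilization"],
    "publish_interval_s": 5,
    "normalize_to": {"temperature": "degC", "power": "kW"},
}

SITE_OVERRIDES = {
    "plant_a_line_1": {"driver": "siemens_s7", "address": "10.10.1.15",
                       "tags": {"good_parts": "DB10.DBD4", "power": "DB10.DBD12"}},
    "plant_b_line_3": {"driver": "rockwell_cip", "address": "10.20.3.41",
                       "tags": {"good_parts": "Prod.GoodCount", "power": "Energy.kW"}},
}

def build_config(line: str) -> dict:
    # Same template everywhere; only the connection block differs per line.
    return {**USE_CASE_TEMPLATE, "connection": SITE_OVERRIDES[line]}

print(build_config("plant_b_line_3"))
```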
David (18:12)
The other big problem we're facing in lots of manufacturing plants is the differences you see between lines, or even within one line, or between plants. I think manufacturing is a world where everything is different. Even if you try to be the same, you still have life cycles of 10, 15, 20 years, so you will never be able to standardize everything, which I would say…
John Younes (18:33)
Mm-hmm.
David (18:42)
directly ties into the problem of data management somehow. Something you mentioned also in the beginning in the capabilities. Data management is a concept which is not understood or only partly understood by many in OT, while it’s kind of standard in IT, I guess. So what’s the state of data management in your perspective, John?
John Younes (19:03)
Mm hmm.
David (19:12)
in manufacturing today.
John Younes (19:15)
It’s still the early days, I think. I think people are coming around. They’re starting to see the value. I think a lot of the use cases started more on the IT side of things, even for these manufacturing companies. It’s not that they’re not using data for their, potentially on other sides of their businesses, whether it’s customer data or if it’s a retail company, if it’s a CPG company, they’re looking at like retail data and things like that.
Data is not a new concept, but I think they just focused on a lot of the easier parts where data was more easily acceptable and accessible because you’re dealing with more modern data. So I think just the challenges more on the OT side is it’s a lot of legacy data, a lot of legacy data sources, a lot of legacy data formats even. there’s the way, the time to getting the data is a lot more complex and lengthy. So I think those created a lot.
higher barrier for people to just kind of avoid it until they were they they tackled I think some other more low hanging fruit use cases in other parts of the business. But now we see them kind of coming back. Whereas I think if you can solve if you can solve issues with your core production and operations and become more efficient there, then that’s going to drive the most the most to your bottom line at the end of the day. So I think that’s that’s really where
time should be spent. I think just the challenges is just the barrier to entry, I think is just very high to use data that I think maybe it scared some people off. And I think also in more the OT space, tend to have their engineers at the end of the day. So it’s people that want to kind of tinker and build things on their own. you tend to get things
done or get things as to a point where maybe you are using data in a certain way, but it’s creating a lot of shadow IT then. And then you’re just creating a whole mess of different unique non-integrated applications on no foundation that you might have somebody built 15, 20 different applications or use cases in the plant. But they’re all custom coded from scratch or they’re pulling from this system or that system.
that it’s just become unmanaged and unmanaged chaos. So even in the OT world, you do need that kind of data management itself, not only to the enterprise, but even in the plant, just because a lot of this kind of shadow IT has been created as a result of that.
David (21:53)
Have you seen good examples of customers who tackled the data management problem? It's not easy to tackle. It already starts with who owns the model. Where do you define your data models? Do you define them as low as possible? Do you define them in a tool like Litmus? But then who owns that
across several regions?
Willem (22:23)
Or even
a scary term like data model governance. What does that look like? Yeah, it's a good way to drive engineers away.
David (22:28)
that’s a scary word.
John Younes (22:30)
Yeah.
Yeah, for sure. First of all, it comes with the right buy-in across the organization. It needs to come from a very high level that basically says: this is how it has to be for us all to be successful. So you really do need that kind of buy-in. But the data models can, and should, be defined at a company level. Yes, there are going to be some things you can reuse across companies, but it's going to be pretty specific to individual companies in terms of what they do. And it should go all the way down to the line level, so that everybody is using data in the same way. If you're not doing that, you keep creating this kind of technical debt over time, and at some point somebody is going to have to clean it up, which is going to be the most costly thing out of all of this.
So whether it's in a tool like Litmus or something else, you really need to define it at the company level. We help customers do that: we have templates and workshops where we work with them to define what their data model should look like, all the way down from enterprise to plant to line to asset. And the governance of that is through a product we launched about a year and a half ago called Litmus UNS. You can define these kinds of hierarchies and data models all the way from corporate down to plant, line, asset, and then really enforce that. So everything communicates through a standard UNS, essentially, which is our own broker that we've built and which communicates over MQTT, so people can publish and subscribe and get access to data in the same way.
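To illustrate the publish-and-subscribe idea, here is a minimal sketch of pushing a data-model-conformant value into a UNS-style MQTT topic hierarchy. The hierarchy levels, payload schema and broker address are assumptions for the example; it uses the generic paho-mqtt client, not the Litmus UNS broker itself.

```python
# Illustrative only: publishing a structured payload into a UNS-style
# MQTT topic hierarchy (enterprise/site/area/line/asset/metric).
import json
import time
import paho.mqtt.publish as publish

def uns_topic(enterprise, site, area, line, asset, metric):
    return f"{enterprise}/{site}/{area}/{line}/{asset}/{metric}"

topic = uns_topic("acme", "ghent", "packaging", "line3", "filler1", "oee")
payload = json.dumps({
    "value": 0.81,
    "unit": "ratio",
    "timestamp": time.time(),
    "quality": "good",
})

# Any consumer that subscribes to acme/ghent/# receives the same
# structured data, which is the "one common interface" idea of a UNS.
publish.single(topic, payload, hostname="broker.example.local")
```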
David (24:22)
Now that you've mentioned the three-letter acronym UNS, John, one of my favorite questions to bounce back is: what is a UNS for you?
John Younes (24:36)
Yeah, I think it’s a, to me it’s a concept. I think you can implement it in different ways, but it’s a concept and an idea that information should be shared through one unique interface that’s accessible on the other side to whoever wants to use that. So everyone’s kind of publishing data in a common way and then also accessing that data in a common way. So to me, it’s as simple as that. You can define it deeper than that and
try and create solutions in many different ways around that. We really take it, I think, more of the governance approach to UNS. Like how do you actually enforce this? think to your original question, Willem, it’s one thing about creating it, but how do you make it enforceable and usable? And that’s where our Litmus UNS product is, was really focused on more of these data governance pieces. And then how do you create the, the hierarchies and the enforcement of that and the deployment of that as well? So.
That’s our position of trying to focus on UNS. We see customers, I think, saying they want a UNS and they want to build it. But it honestly looks different across customers of what their interpretation is of a UNS, but they’re just calling it UNS. But I think if you take it at the highest level, as where I started, I think that’s what everyone is really trying to achieve is that common way for data to be sent and received and used essentially across the organization.
Willem (26:03)
So you're creating some sort of abstraction layer from which you can then continue building. Now, before we scare away listeners even more with talk of governance and the like, maybe a more trendy question. How do you see the role of AI? And I'm not talking about machine learning rebranded as AI, but everything that has to do with LLMs. How do you see that working within an enterprise environment?
John Younes (26:33)
Yeah.
Willem (26:34)
Asking ChatGPT, can you run my plant? I wouldn’t trust it yet, but you’re a bit closer to the action, so what’s your take on it?
John Younes (26:42)
Yeah. Our take on AI is, of course, that Litmus provides the foundation for AI. If you don't have good quality data coming out of your plants, made accessible in the right formats and structure, AI can't really be leveraged. Obviously that's what we've always been doing, but our real take is on how to bring AI to the edge, inside the day-to-day workflows of people on the plant floor. Everybody has co-pilots, everyone has ChatGPT or likes to play with those tools, but we didn't want to be one more chatbot. For us, we're still a data foundation essentially, and we want to provide more value on top of that same foundation, by being able to plug AI into some of the analytics workflows that customers are running within Litmus Edge.
So let's say you're looking at some KPIs or variables of a CNC machine: using AI to infer that something is going wrong and what the reasoning is, and providing that to the operator. So not actually taking decisions, I don't think people are ready for that, but providing the prescription and the reasoning of what could be going wrong, so that the person gets that information at the right time and can then take those decisions. We now have the ability to integrate models and LLMs, whether from ChatGPT, Gemini, OpenAI or Bedrock, whatever it is, so that on top of those same analytical workflows you have an AI processor, which takes the next step and tells you what could be going wrong and what you should be doing about it. That's how we're focused on it right now. These are new features we announced a couple of weeks ago, and we have big signups in our private beta, so they seem to be very well received.
The second part of that is leveraging AI to actually deploy Litmus Edge, to accelerate, again, scale. To go back to the scale story: how do you mass deploy and replicate your edge deployments across many sites in the quickest way? There are AI agents and things we've built focused on the IT administrative point of view. You can literally tell the prompt: deploy Litmus Edge in these 30 plants, I want access to these data points, and connect to this equipment using the Siemens driver. Whatever it is, you tell the prompt and it will actually deploy it. We have a demo of this where you literally see it deploy Litmus Edge in 30 factories and connect to the equipment. So it's also about removing the administrative tasks of configuring and deploying, by leveraging AI.
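As a sketch of what such an "AI processor" step could look like, here is a minimal example that hands edge-computed KPI values to an LLM and returns an explanation for the operator to review. The model choice, prompt and KPI names are assumptions; the OpenAI Python SDK is used purely as one possible backend.

```python
# Illustrative only: an LLM step that explains anomalous KPI values for an
# operator to review. It advises; it does not take actions automatically.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_anomaly(machine: str, kpis: dict) -> str:
    prompt = (
        f"You are assisting a machine operator. CNC machine {machine} shows "
        f"these KPI values: {kpis}. In two or three sentences, state what "
        "may be going wrong and what the operator should check first. "
        "Do not recommend automatic actions."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(explain_anomaly("CNC-07", {"oee": 0.58, "spindle_load_pct": 97, "scrap_rate": 0.06}))
```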
David (29:53)
Yeah, I think that’s probably one of the cases we’ll see the most early is that, you know, this way of deploying stuff or building workflows, it is something you can, it’s a language. So you can somehow use LLMs to help out. On the other hand, there is also the question of, you trust the outcome?
Or at least is the outcome, is that a repeatable outcome? Can things go sideways? How do you protect, I would say, things from going totally wrong?
John Younes (30:29)
Yeah.
Yeah, I would do it up to the degree you're comfortable with, but for now at least there should always be some manual intervention looking at whatever the prompt produces at the end. Automate as much as you're comfortable with; that's what the focus should be at the moment.
Willem (30:59)
So John, you also said you have 10 years of experience with Litmus, so you've seen the domain, the maturity and the way we think about data grow within manufacturing. How do you see it growing in the coming 10 years? And what do we really need to do to make that possible?
Because if we keep the same habits and ways of thinking that we have now, we're not going to grow. So what do we need to do to unblock that?
John Younes (31:36)
Yeah, think there’s, again, I think we’re still at kind of the early days. Companies are coming around to the aspects of data management and how to leverage data from the manufacturing operations point of view at scale and in the enterprise. So it’s still going to be a lot of education over the next few years to really, I think, get people to fully be bought into this. think we’re at that, yeah, I think we’re at that kind of critical
Willem (32:00)
We can help with that.
David (32:02)
Yeah
John Younes (32:05)
Tipping point now. It started, I think, last year; we saw it tremendously, and we saw it a lot in our top-line numbers as a company, just because companies are finally starting to embrace industrial data ops, or again, plug in whatever buzzword you want, industrial data management. So it really is starting, we're seeing the turn happen, but there's still a lot of effort needed over the next few years to do that.
At the end of the day, you're still selling an infrastructure technology. So you need to start by looking at the customer from a business perspective: what are they trying to achieve out of this? And if you can come up with enough compelling use cases, or have done compelling use cases with your customers where it's impacting their business, those kinds of things will accelerate the adoption and deployment as well. So yeah, again, it goes back to education.
Willem (33:12)
Let me maybe rephrase it.
Another way of looking at it: what differentiates customers where things go super fast, where you manage to scale things and create a lot of added value in a short time, from customers where it's more of a struggle to get things started? Do you see a couple of differentiators?
John Younes (33:34)
I think it’s the digital maturity of that company. Have they fully bought in to the idea of data needs to be a strategic asset? So I think that’s number one and all the way up to the top level of that company. And it really needs to come from top down that say we need to be data driven. And the companies that we’ve seen that have that top down push, just as simple as that, that we need to embrace data. We need to make sure that we’re
We’re doing it in the most efficient, best way possible to impact our operations. Those are the ones that we’ve seen actually are deploying at scale right now, where they have had that kind of top-down push. Because if you don’t have that, at end of the day, you’re going to run into roadblocks, you’re going to run into people that are non-believers, and it will just slow down the whole process. So I think we need to continue to educate at all levels, not just kind of the engineer, but help them as well continue to educate up the chain as well.
That’s really where I think we’ve seen it. And digital maturity is not only looking at it from that lens, but I think it’s also you need to be set up to be successful, whether it’s having the right network, as simple as having the right networks in place in your plants, that data can actually be, can flow and be, and you have the connection, the physical connection set up or connection set up. So even as simple as that, you could go to some plants and they’re, none of the machines are even networked. There’s still scenarios like that, which unfortunately it’s…
It’s not as I wouldn’t say it’s the majority anymore, but.
Willem (35:05)
AI can solve that for you.
John Younes (35:07)
Yeah.
David (35:08)
Or you go for data by pigeon or something like that.
John Younes (35:11)
Yeah,
exactly.
David (35:14)
Hey, I think we need to find a new cool buzzword which basically means data governance, but without saying data governance, so that people might start adopting it, because I think that's still one of the core things we need to figure out to make this grow.
Willem (35:35)
Something fun, but then of course in a way that people don't come into the meeting and realize there's no free drinks.
David (35:43)
Let’s l-
John Younes (35:43)
That’s
really data management at the highest level, but it’s not as fun.
Willem (35:49)
Yeah, but there’s been so many efforts with data
management and I think sometimes the failed attempts in data governance, yeah, doesn’t make it easier to come back and have the discussion.
David (36:03)
But still, I think you’re spot on, John. The fact that this top-down commitment is super, super important to be able to scale the fact that the data needs to be right. think we can only, from our perspective, only agree to that point. And I think with that short summary,
John Younes (36:26)
Uh-huh.
David (36:31)
I believe this is a wrap for this episode of the IT/OT Insider podcast, where we again explored how to make industrial data work for us. So John, thank you for sharing your insights, and thanks to our listeners for tuning in. If you enjoyed the conversation, don't forget to subscribe at itotinsider.com and leave a rating. See you next time for more insights on bridging IT and OT. And until then, take care.
Bye bye.