Willem (00:00)
Welcome, you’re listening to the IT/OT Insider Podcast. I’m your host Willem and in this special series on industrial data ops, I’m joined by my co-author David and Andrew Waycott from TwinThread. If you want to hear more about what’s shaping the world of industrial data and AI, don’t forget to subscribe to this podcast and our blog. Welcome, Andrew.
Andrew (00:21)
Hey guys, great to be here.
Willem (00:23)
And welcome, David.
David (00:26)
Thank you Willem. Good to be here as well. And in today's episode, indeed, we are joined by Andrew Waycott, the president and co-founder of TwinThread. Andrew has a very long history in OT, and in 2016 he co-founded TwinThread to make industrial AI and digital twins accessible for everyone in operations. Andrew, I hope this mini intro is more or less accurate.
Andrew (00:54)
Mm-hmm.
David (00:55)
All right. So why don't you kick things off with a short introduction of yourself and TwinThread.
Andrew (01:02)
Yeah, great. So, as you said, David, I've been working in the world of industrial data, using data to optimize industrial factories and field-deployed equipment, my whole career, over 30 years at this point. I started my career in the pulp and paper industry, both in Canada and in Belgium. From there, I joined what was a very small company called Mountain Systems. We developed an MES system at the time called Proficy, later called Proficy Plant Applications after we were acquired by GE. During that time, we also built Historian, which was also acquired by GE and is still today the Proficy Historian product that they market and sell.
From there, I founded an industrial optimization consulting company called SlimSoft, which eventually merged into a company we called Factora. And shortly after exiting that company, I rejoined the band, if you will, with my colleagues and the founders of Mountain Systems, Erik Udstuen and Tom Nettell, to form TwinThread. And really, TwinThread is all about making industrial data accessible to the non-technical engineers and subject matter experts in manufacturing, and allowing those people to deliver and solve problems in their industrial companies without having to become data scientists and know all of those intricacies, but still empowering them to use their expertise, which is how to make the stuff that they're making, to actually solve these problems.
David (03:03)
We're going to talk about that in this conversation. Is there still a big gap between being a data-driven organization versus, I would say, just going with the flow from problem to problem each and every day? What's your take?
Andrew (03:25)
For sure. I mean, when I started my career, it was obviously extremely rare back then for companies to have any sort of data strategy at all; really, it was fighting fires day to day. But frankly, still today, I would say the majority of companies that I visit don't have a cohesive data strategy. It really is "how do I solve my next problem" most of the time. Things have changed, of course, and there's lots of momentum towards these kinds of bigger data platform strategies and things of that nature, but it's still a little bit the Wild West out there.
David (04:17)
More things for us to do, I guess. Hey, so we'll certainly come back to data strategy, but before we do so: as our readers and listeners know, we have this thing called an industrial data platform capability map. I'm not going into details right now, because we have a first episode in this series where myself and Willem explain the map in detail.
Andrew (04:25)
Mm-hmm.
David (04:46)
But I would say, first of all, how do you perceive this map? Where do you position TwinThread? Can you explain a little bit, I would say, the different components of your offering to our listeners? And for those who are only listening: also go to the blog, to the article, where we will make this visible.
Andrew (04:53)
Mm-hmm.
Yeah. So, first of all, it really hits home, what you guys have put together. I think you're covering the various components of this space really well. So it's funny: in your blog introducing your data platform capability map, you make a comment about how all vendors are going to say they do everything.
And I'm no exception, I'm going to say that.
David (05:39)
Okay, yeah, no, but... we thought so.
Andrew (05:46)
Yeah.
But I'm going to put a big caveat on that, which is: you also said in that same blog that you're not endorsing any company choosing just one piece of software to solve this problem. And we believe that fully. So while we do cover all the aspects of the map, and I'll get into some of those details here in a second, we do so in a way that is intended to help in each of those areas, as opposed to being a closed system in those areas. Right? So, starting at the beginning with connectivity: we do have edge connectors for a very broad set of industrial data sources. The classic historians, OPC, MES systems, databases, and also more cloud-oriented sources like IoT connections and the like.
MQTT, of course. So we have a really rich set of ingest capabilities into our platform. From there, we bring in and organize that data into digital twins. So that's your context and data management layer. We have a rich digital twin, which I would say follows largely your asset data model structure. But we also incorporate, and give access to, data around maintenance and production, et cetera, within that same digital twin infrastructure. We use our Thread Engine, which is the logic engine in our platform, in a whole bunch of different ways. The first place we use it is here, both for allowing our customers to add context to the data that's coming in (the raw time series data, for example), but also as a form of cleaning and organizing of that data, which falls into your data quality section. Of course, within a digital twin platform, with all that context, we have to be able to store that information. So we are in essence a cloud historian within our solution as well. That being said, I don't advocate customers ripping out their on-prem historians and things like that. We view our solution as augmenting those, adding the ability to organize and provide context from our platform. But in fact, we are a historian and storage layer. Of course, we also have dashboards and visualization tools, and things like trending are all part of our solution. Again (I skipped analytics, I'll come back to that in a second), I view our visualization layer as what is needed in order for that subject matter expert I was referring to earlier to be able to operationalize the problems they're trying to solve. So we're not trying to be a comprehensive dashboarding tool for manufacturing. What we're trying to do is enable people to solve problems
David (09:07)
Yeah.
Andrew (09:09)
and be able to see the data needed in order to do that. Data sharing: again, you talk a lot about the bronze, silver, gold type of methodology. As part of our solution, we provide what we call curated data sets, where we use the context we have to pre-filter, pre-organize, and aggregate data into easily consumable data sets for tools like Power BI or any other BI-layer tool. And we can either present them directly to those tools, or we can drop them into a drop zone for Snowflake, et cetera, to pick up. In essence, by doing so, we can curate and provide that gold-layer data for them to
David (09:55)
Yeah.
Andrew (10:07)
take forward from there. And I skipped analytics, which frankly is the core of TwinThread. Everything we do is about presenting use cases for manufacturers, so they can identify, optimize, and solve problems in their manufacturing facilities, with AI as an enabler of that, and drive that through to actually operationalized solutions running in the plant. And of course, I'm sure we'll talk more about that as we go along here.
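The curated-dataset idea Andrew describes, pre-filtering and aggregating raw time series into gold-layer tables a BI tool can consume directly, can be sketched roughly as follows. This is an illustrative toy, not TwinThread's actual schema; the asset names, hourly granularity, and column names are all assumptions:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Raw "bronze" time-series rows: (asset, timestamp, value).
raw = [
    ("dryer_1", "2024-05-01T10:05:00", 8.4),
    ("dryer_1", "2024-05-01T10:35:00", 8.8),
    ("dryer_1", "2024-05-01T11:10:00", 9.1),
    ("dryer_2", "2024-05-01T10:20:00", 7.9),
]

def curate_hourly(rows):
    """Aggregate raw sensor rows into an hourly 'gold' table:
    one record per (asset, hour) with the mean value, ready for a
    BI tool or a cloud drop zone."""
    buckets = defaultdict(list)
    for asset, ts, value in rows:
        hour = datetime.fromisoformat(ts).replace(minute=0, second=0)
        buckets[(asset, hour.isoformat())].append(value)
    return [
        {"asset": asset, "hour": hour, "avg_value": round(mean(vals), 2)}
        for (asset, hour), vals in sorted(buckets.items())
    ]

gold = curate_hourly(raw)
# dryer_1 has two samples in the 10:00 hour, so they collapse into one row
```

The point of the pattern is that the consumer (Power BI, a drop zone, a notebook) never sees the raw firehose, only the pre-organized, context-enriched table.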
David (10:41)
Wow. That's a lot. Yeah, you did.
Willem (10:44)
So yeah, you covered every field.
Andrew (10:46)
Yeah.
As an example, with connectivity, I have no aspirations to be everything for everybody. I'm sure customers will use tools like, for example, HighByte and Litmus and those types of technologies to bring data into the ecosystem. And if they do, that's great for us; we'll just use that data and those streams as well. So we're not enclosing the thing, but we do have enough pieces that
David (11:04)
Yeah.
Andrew (11:18)
that subject matter expert can get started without having to build an entire sort of architecture themselves before they can even solve the problem.
Willem (11:29)
Talking about solving problems: I think you also brought a use case, maybe to show a little bit and make it a bit more concrete. So tell me more.
Andrew (11:39)
Yeah, I did. I'll tell a story about one of our customers, Hill's Pet Nutrition. Hill's is owned by Colgate. We work closely with both Hill's and Colgate, but with Hill's specifically, the story I wanted to tell is about our work in their dry kibble dog food and cat food plants.
So in their facilities, we have implemented all those pieces that I've just talked about: connecting to their AVEVA Wonderware-type infrastructure, data historian, et cetera, pulling in data from hundreds and hundreds of data points per line. And then, using that data, we targeted optimizing the quality of their process.
You know, with dog food, you need to have the right moisture level, the right fat level, the right protein level, et cetera. I won't get into the details of the manufacturing process, but what we have actually been able to do, using our platform and our Perfect Quality module, which is essentially a very wizard-based walkthrough of how to configure and, as I said, very importantly, operationalize the process of optimizing quality, is predict the quality of their food at the end of the line, but with enough timeliness for them to make changes at the beginning of their line, so as to never produce a bad batch of product. And even more importantly, to optimize those quality parameters such that we can target the perfect location within spec to hit the best profitability slash yield point for that process. And we've done that at scale now, across every single line Hill's has around the world. I believe it's around 18 production lines globally.
And it's a fully closed-loop solution. Not only are we analyzing and presenting information about what is driving the variance in their quality, we're also pushing those results down to the control system so that their process can auto-adjust itself to always hit that perfect quality. And obviously that means
Andrew (14:33)
a very consistent product. Another knock-on benefit, though: as we've looked back over the past years, with the challenges around hiring staff for manufacturing facilities, et cetera, where post-COVID they've seen a dramatic reduction in the average tenure of an operator, we've actually driven an increase, because our product is essentially embedding all of those black-magic secrets that were in operators' pockets into our solution. And so we're actually reinforcing best practices within their operations daily, hourly, by the minute.
Willem (15:19)
Okay. You were mentioning closed loop. I don't think we've seen that many examples of closed loop in the wild, or at least it's not common, I'd say. And you seem like somebody who has a lot of experience with automation systems, implementing MESs and systems like that. So how do you go about implementing this as a closed-loop system?
Andrew (15:39)
Mm-hmm.
Yeah, so you're right. I'm proud to say I think we're the first company that has been able to do a closed-loop, AI-empowered quality solution at that scale in the food industry. So that's something we're very proud of. This was a number of years ago that we completed that work, so it's been running for a while now.
I think a lot of people spend a lot of time focused on the edge when they start thinking about closed loop, and speed, and, you know, millisecond response times and things like that. But in most industries, if you actually take a step back and look at how operators control their plants, they're not sitting in front of a screen with their finger over the enter button waiting to do something, right? They're walking up and down the line. They're looking at different screens. Realistically speaking, most operators are making a change maybe a couple of times an hour, right? And so closed loop doesn't have to be millisecond response. Closed loop, I would argue, in most process industries shouldn't be millisecond response, because there's an inertia inside of
a line: you need to see the results of what you've done before you can do something else. So, you know, it's really not that hard, frankly, to push down recommendations every minute to a control system. Now, being careful, of course, where you're pushing: you push down to recommendation registers in a PLC, for example, with logic in that PLC, guard rails in that PLC, to make sure something crazy isn't being done. And if those guard rails are passed, you auto-accept those recommendations into the set points. But of course, you have a switch so operators can flip it if something is going wrong; they can take back control and do it the way they always did. But doing that, again, conceptually is not hard.
It is also much easier for operators to accept that, because they're still in control. They can make decisions about when to accept it or not, if need be. And it follows the paradigm they're used to. So from an acceptance and cultural transformation perspective, it also is a significant improvement.
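The guard-rail pattern Andrew describes (recommendation register, limit checks in the PLC, rate limiting for process inertia, and an operator override switch) can be sketched in a few lines. This is a minimal illustration, not TwinThread's or any PLC vendor's actual logic; the function name, limits, and step size are invented for the example:

```python
def apply_recommendation(rec_setpoint, current_setpoint, lo, hi,
                         max_step, auto_mode=True):
    """Decide what set point the process should actually use.

    A recommendation lands in a 'recommendation register'; guard rails
    clamp it to absolute limits and to a maximum move per cycle, and it
    is only auto-accepted while the operator leaves auto mode on."""
    # Guard rail 1: the recommendation must stay inside absolute limits.
    clamped = min(max(rec_setpoint, lo), hi)
    # Guard rail 2: never move more than max_step per cycle,
    # respecting the inertia of the line.
    delta = clamped - current_setpoint
    if abs(delta) > max_step:
        clamped = current_setpoint + max_step * (1 if delta > 0 else -1)
    # Operator override: if auto mode is off, keep the existing set point.
    return clamped if auto_mode else current_setpoint

# A wild recommendation of 95 gets clamped to the 60..90 band,
# then rate-limited to a 5-unit step from the current 80.
new_sp = apply_recommendation(95, 80, lo=60, hi=90, max_step=5)
```

Flipping `auto_mode` off is the "switch" Andrew mentions: the operator keeps the current set point and full control.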
Willem (18:35)
Okay. Yeah, I think technically it's never really the issue. It's more like everything that surrounds it: the people on the shop floor, even how you're programming your PLCs, because it's not just some cloud solution that's going to deliver everything. You also need close collaboration with the shop floor to make closed loop happen. They need to be on board with this.
Andrew (18:58)
Yeah, absolutely.
Yeah. One thing that we have learned: in our early days of kind of dabbling in and touching closed loop, we used to start our conversations with customers by saying, you know, don't worry, we'll get there someday; this isn't something we'll do tomorrow. Because it was scary, and it's still scary to most people in manufacturing. What we've learned is that the best thing to do is to start the project thinking about that.
And as I said, if you think of that analogy of the set-point registers in a PLC and an accept button, there's no reason why you can't do that even on a pilot project, because somebody's physically pushing a button to do it. Right? And as soon as you've done that, from day one you're set up for that pilot project to be successful and evolve into a fully closed-loop solution. So why add steps of "are we ready? are we ready?" You're not introducing risk by doing that initial integration almost right away.
David (20:12)
That's probably a good point. So the closed-loop thing is indeed not too common. But I also want to link that back to an overall strategy. You talked about data strategy. What I see happening quite a lot is very ad hoc projects: we're going to try something over there, we're going to use this tool, we're going to play around with whatever. And if you're really that ad hoc in your approach, then first of all, it is very hard to do something clever with data, let alone AI. You somehow need to build that. Someone needs to, I would say, sponsor the strategy. So what are good strategies from your perspective? What makes or breaks a data strategy?
Andrew (21:10)
Yeah. So this, I guess, may be an area where we differ a little bit from the common thinking in the market today. Our platform, really, at its core, what we're trying to do is accelerate through getting the data in the cloud, and get to solving problems as fast as possible.
There’s been a lot of money and time spent on this theory that if we get the data in the cloud, miracles will happen. And very little time has been spent on, OK, how do we actually drive value? So TwinThread is all about being able to start small and go fast.
and solve problems and get value all along the way. But because our platform is as broad as I described earlier, you're not doing a little pilot project in a corner and then realizing that it can't scale; the platform is built to scale from day one. But that doesn't mean you have to take on everything at once. It means that, following within the guard rails that we provide a path that will get you to scale, you can start really small. And personally, I think that's the way to go. And I guess the big challenge, and I'm not sure if this is for right now, but one thing that I see as a really big challenge in the market right now, that I hope is going to change over the next few years, is this: as all of the different end-user companies out there are building their own data lakes, essentially we have a snowflake, and I don't mean the company, I mean the thing that falls from the sky, of data structures and standards. Essentially, we're going from a world where there was one standard, the historian, with
David (23:36)
Yeah. Mm-hmm.
Yeah.
Andrew (23:40)
big value, right, to a thousand or ten thousand different ways of saving that data. And that disparity is making it really hard to take advantage of the learnings of others. So hopefully we're providing enough structure for people to get started easily.
David (23:58)
Yeah.
Yeah.
Yeah, indeed. And you're absolutely right. I've had that discussion a couple of times too. Well, actually, I have this discussion multiple times a week: what structure do we follow? How do we get our data into whatever cloud we've decided to use? We are today again in a scenario where everything is again bespoke;
Andrew (24:27)
Mm-hmm.
David (24:34)
everyone builds their own thing, because there is indeed no default standard.
Andrew (24:39)
That's right. And frankly, I'm a little worried about it. You know, if you just look at TwinThread, right, as I talked about at the beginning of the story here: we've got all these out-of-the-box connectors to every historian imaginable, and things like that. Well, why can I do that? It's because there's a standard there. It's easy for me to connect to historians because there's a paradigm that is relatively strong, right? If you go look at
Andrew (25:09)
how that same concept of storing time series data is being done in Databricks, for example (and this isn't a dig at Databricks, it's just companies using the Databricks tool): I could go to 20 different companies, and there will be 20 different ways of storing that data at the silver layer. Right? And obviously at the gold layer it's going to be different, but that's okay, I guess. But even at the silver layer, the clean data that's ready for use: that clean data looks different at every single company you go to.
David (25:47)
Yeah, for sure, for sure. It's a big challenge, and I really hope that in the years to come we again get to some standard way of working, just to be able to scale, because that's the big problem. The big problem is not really everybody building their own thing (well, that also is very costly, obviously), but that it's hard to maintain and even more difficult to scale. And from a supplier point of view, every time you go somewhere, you're faced with a new, I would say, a new data problem to solve.
Andrew (26:26)
Exactly. Yep.
David (26:30)
Yeah.
Willem (26:30)
Talking about scalability, because that's also one of the things I noticed doing a little bit of research about you guys: I think one of the selling points you mentioned was scalability. Now, I would be interested to know from you, what do you mean by scalability? I mean, there's the technical side, but I'm sure there are other aspects. Let's just start with that: how do you define scalability?
Andrew (26:55)
Yeah, well, I mean, like you said, there are technical ones and there are also organizational ones, I guess, right? So, just speaking technically for a minute: we have over a million digital twins running in our platform today, 50 million sensors, and over two million AI models running. So that's technical scale. And with some of our customers, it's very common that I'll get questions about that. At the first company I worked at, a pulp mill here in Canada, we had a hundred thousand tags in our PI system, and that was mind-blowing. But it's nothing compared to the amount of data that is out there today. So we
deal with the scale of data and devices and all that complexity really well. The AI models one is actually really interesting. People don't really think, when they're doing those pilot projects at the beginning, about managing an AI model and keeping it healthy. It's huge; most data science projects involve retraining, ensuring that the latest model is deployed everywhere, et cetera. So within our platform, we actually automate a lot of that as well: the retraining process and the health of the models. And the other thing that's really key to scale is, if you do have a scenario where you have a fleet of pieces of equipment that are the same,
you want to be able to manage that model at the fleet level, not down at each individual device level. But what's interesting, and what we've done, I think, relatively uniquely, is address the reality in most manufacturing facilities: you may have the exact same motor all over your factory, or you may have five lines that were identical machines, but they actually aren't identical anymore. Right? And so the idea of building one model and having it run everywhere is really not that practical. So what we've done is design a solution, which we call the model factory, which allows us to automate the process of training a model for each of those lines, with data from that line uniquely, while managing it at the macro level. So that means each line essentially has its own hyper-personalized model, for each line and each product within each line, so that the modeling is scalable across an entire organization, and automated, so that one person can be managing an entire corporation's worth of industrial data. So there's the technical part. Organizationally, I think it comes back to that operationalization comment I made earlier:
our kind of out-of-the-box solutions are designed such that they walk you all the way from the data science of figuring out what data I need, et cetera, all the way through to how do I get this in front of operators and close the loop. And we believe really strongly that unless you start your project with that sort of end goal in mind, of closing the loop, you can't go global with a solution and actually deliver the value you're hoping to.
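The "model factory" pattern Andrew describes (one model definition, trained separately per line on that line's own data, managed in a single registry) can be sketched like this. The class name, the per-line z-score anomaly band, and the line IDs are stand-ins for illustration; TwinThread's actual models are far richer:

```python
from statistics import mean, stdev

class ModelFactory:
    """Toy model factory: one model definition, a separate instance
    trained per line from that line's own history, all managed
    centrally in a single registry."""
    def __init__(self):
        self.models = {}  # line_id -> (mean, stdev) of that line's data

    def train_all(self, history):
        """history maps line_id -> list of that line's own sensor values."""
        for line_id, values in history.items():
            self.models[line_id] = (mean(values), stdev(values))

    def is_anomalous(self, line_id, value, k=3.0):
        """Flag a reading outside the k-sigma band of its OWN line."""
        mu, sigma = self.models[line_id]
        return abs(value - mu) > k * sigma

factory = ModelFactory()
factory.train_all({
    "line_1": [10.0, 10.2, 9.8, 10.1, 9.9],
    "line_2": [20.0, 21.0, 19.0, 20.5, 19.5],  # "identical" line that drifted
})
```

The payoff is visible at prediction time: a reading of 21.5 is wildly anomalous for line 1 but perfectly normal for line 2, which is exactly why one fleet-wide model for "identical" lines falls short.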
Willem (30:55)
Could you go a bit deeper on the last point? Like why is that closed loop part so essential to achieve the scalability that you’re looking for? What happens if you don’t have it?
David (30:56)
Thanks
Andrew (31:07)
I mean, how many projects have you seen in your life (I can certainly say I've seen them in mine) where you build the dashboard, or you provide some level of insight, or even a project that has nothing to do with IT, where you design a new Black Belt Six Sigma project around X, Y, and Z. And you leave the project, and it's so successful and everybody's happy. Then you go back and visit that same company three years later, and nobody's doing anything related to what you did anymore. Right? It's become shelfware, or whatever. And that, I think, is my point.
David (31:59)
Yeah.
Andrew (31:59)
You need to have something that actually embeds into the daily work practice of the team, in order to ensure that it's actually going to continue for the long term, and the value is going to be realized. Because you don't realize value from an AI or machine learning project on day one. You realize it at the end of the year, when you look at your financial statements and see additional profit. And that only happens with continued use of the solution for the long term.
David (32:34)
It's interesting, because it's absolutely "start small, think big," but with a non-optional "think big." Where you go: we need to reach that certain point; in your case, we need to be able to automatically have these AI models run in closed loop. We're taking steps towards it, but the end goal is still clear.
And I like that a lot, because it's a bolder statement than saying we're happy if we see the trends.
Willem (33:10)
I would even put it slightly differently. I see a lot of analogy with what's been happening for more than a decade in DevOps, where it's not just about the practice of bringing operations and development together; it's also about automating as much as possible, because that frees up capacity, it takes stuff off your mind, and it makes it repeatable. I feel something in the same direction here: if you automate it through closed loop, your model is there, it's taken care of. You don't need to work on change management, tracking, asking people to do it all the time. It's there.
Andrew (33:51)
Now, I am going to hedge my bets a little bit here and say that not every industry is going to be able to do that, right? There are some practical limits. So closed loop in some industries may actually be a screen that shows the recommended set points, and a work process for operators to do the final decision-making of implementing them to close the loop. That may be the best you can do in a certain industry, but it's still having that mindset of: we're getting to an operationalized thing that's going to actually affect the shop floor and how it runs. That's what's actually going to realize value.
David (34:34)
And maybe another, totally different question. We do some work around IT/OT convergence, collaboration models, et cetera. We talk about the simple fact that you need technical cooperation and organizational cooperation. Suppose you go to a customer and you're pitching TwinThread.
Andrew (34:52)
Mm-hmm.
David (34:59)
What are red flags for you when you're talking to a customer and you hear something? What are the red flags where you go: yeah, the chances that we are going to do a successful project over here are, I would say, very, very low, to say the least.
Andrew (35:18)
Yeah, I mean, I think in our case, automation, and some relative sophistication in the controls layer, is sort of a key requirement, right? We do often still, as I said in my intro, talk to customers that are taking their center lines and writing them on a piece of paper, you know, and things like that. And they may be excellent candidates, but they require
David (35:34)
Mm-hmm. Yeah.
Andrew (36:00)
a couple of steps of transformation to happen at the beginning of the project. And so when I'm in situations like that, it's not so much that that company is a red flag long term, but they're somebody where we will need to pull in a partner to work through that initial automation work, prior to them being an ideal candidate for leveraging our platform. Yeah.
David (36:20)
Mm-hmm.
I would say that's on the automation side of things. Do you also see differences in overall maturity related to data: having champions available, IT/OT collaboration, that type of thing?
Andrew (36:49)
Yeah, I would say so.
Obviously, the dream scenario is that IT and OT get along and love each other. I've seen that once or twice in my career. The reality is, I do think you need to have champions within the organization, and ideally quite senior champions within the organization,
David (37:21)
Yeah.
Andrew (37:23)
certainly to get to scale. Even though I talked about what we see as the keys to successful value being scale and operationalization, of course we do very often do pilots with companies that start quite small, where they have maybe one guy, one team in one plant, that is forward-thinking. And quite frankly, those often struggle if the senior vision differs from that person's vision. So to get from that first line to all the lines, or from all the lines to the next plant, you really do need that sort of more senior buy-in to the importance of a strategy
David (38:04)
Yep.
Yeah.
Andrew (38:17)
of data and industrial AI and things like that in order for that to be successful. The good news is I think that’s becoming more and more common. If I compare it to nine years ago when we started, it’s a very different world today in that regard.
Willem (38:37)
Well, maybe one final question, given that you have quite long experience in the field. What are, according to you, the roadblocks when it comes to digitalization initiatives, specifically in manufacturing? Where do you see the problem now? And if you could get this unstuck, what would make the next wave happen?
Andrew (39:04)
Honestly, I kind of already let the cat out of the bag on this one a bit, with my comments about the lack of a standard for landing data in data lakes. I really see that as a major impediment to us being able to move forward as an industry, right? Again, if I think back to where I started my software career with Mountain Systems: we built an MES system on top of PI. That's what we started doing. And it really started out being a quality system on top of PI, and then we grew it to a full MES. That would be very hard to do today: to build an MES system on top of
David (39:59)
Yeah.
Andrew (40:03)
people's cloud time series data stores, right? Just think about that. And that, I believe, is the core blocker. What we need as an industry... maybe as a silly example here: if we think about MQTT and the UNS world, right, MQTT is central to that. Well, MQTT has no standards. You can format it however you want; it can look like whatever. Sparkplug B takes that technology and layers a standard on it, so that you can now interoperate between all these different systems. Right? That's why all the UNS stuff took off, et cetera. Now, what's the equivalent for stored data in the cloud? There is not one. And until there is one, it's going to be very hard for companies to truly realize the value of "if we get our data in the cloud, we're going to see miracles happen." Unless they just build everything themselves from scratch, which of course some companies are doing, and very successfully. But for that to be accessible to the medium and small manufacturer, you know, that only happens when standards truly become something that we can count on.
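Andrew's MQTT versus Sparkplug B contrast can be made concrete: plain MQTT leaves both topic and payload entirely up to each team, while the Sparkplug B specification fixes the topic namespace (and, via protobuf, the payload format, which is omitted here). A rough sketch; the plant and device names are invented for illustration:

```python
import json

# Plain MQTT: every team invents its own topic and payload shape,
# so no two plants look alike. This is the interoperability gap.
ad_hoc_topic = "plant3/line2/extruder/moisture"
ad_hoc_payload = json.dumps({"val": 8.2, "ts": "2024-05-01T10:00:00Z"})

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a topic in the Sparkplug B namespace:
    spBv1.0/{group_id}/{message_type}/{edge_node_id}[/{device_id}]
    where message_type is one of NBIRTH, NDATA, DBIRTH, DDATA, etc."""
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)
    return "/".join(parts)

# Device-level data from an edge node, in the standardized namespace:
topic = sparkplug_topic("PlantA", "DDATA", "Line2Gateway", "Extruder")
```

Because every Sparkplug publisher uses the same namespace and message types, any consumer can discover and interpret the data without per-plant agreements, which is exactly the property Andrew says cloud time-series stores still lack.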
David (41:42)
Absolutely. And it's not just about the cost of building this thing yourself, which I think everybody understands. Well, maybe everybody understands that there is a cost; most people, I think, tend to underestimate that cost.
Willem (42:02)
All people tend to underestimate costs. All of them.
Andrew (42:04)
Big time.
David (42:05)
Yeah, really, all people.
But the interesting thing is that what you actually want to achieve is to make it so easy for people to experiment, to make changes, to grow. You want to reduce that cost of experimentation to the lowest possible time or monetary investment, so that people across the enterprise can just go: here's an idea, and it works or it doesn't work. And that's the big problem, the big cost, of being, I would say, in such a bespoke mode.
Andrew (42:45)
Right, exactly. And really, in this world where we don't have standards, I guess that is what we've tried to help with, right? That's why, if you look at our website and our offers, we talk about Perfect Quality, Perfect Uptime, Perfect Batch, energy optimization. These aren't just ideas. These are actual, in many cases wizard-based, things that an engineer in a factory who gets an idea, like you just said, David, can actually go and solve themselves, and see if it'll work on their line. And if it does work on their line, they can try it on the next line, and the next line, and it'll expand from there. That's why we've done that, hopefully taking the complexity of all of this stuff, and the blockers and things that we've been talking about, away,
David (43:25)
Yeah.
Andrew (43:43)
given the reality that we live in at this time with industrial data platforms.
David (43:51)
Perfect, perfect. I think that's a wrap for this episode. In this episode of the IT/OT Insider Podcast, we again explored how to make industrial data, and in this case also AI, work for us. A very big thank you, Andrew, for
David (44:13)
taking part and sharing your insights, and to you, our listeners, for tuning in. If you enjoyed the conversation, don't forget to subscribe at itotinsider.com and leave a rating, because it absolutely helps. I'll see you next time for more insights on bridging IT and OT. Until then, take care. Bye bye.