Willem (00:00)
Welcome to the IT/OT Insider, David. Today we brought Dan Jeavons from Applied Computing. Now he’s doing some pretty exciting stuff. Yeah, Dan, welcome.

David Ariens (00:07)
Mm-hmm.

Dan Jeavons (00:14)
Thank you so much for having me. It’s brilliant to be with you. I’m really excited for this conversation.

David Ariens (00:20)
Absolutely. Yeah. I've been looking forward to this conversation for a while, because this is going to be a data and AI conversation that's right up our alley. So yeah, Dan, you'll introduce yourself, but I did dig a bit into your history. You started at Shell in five different roles, and you've also become a VP of Computational Science and Digital Innovation, which is really, really interesting.

We'd love to hear more about what that means in the context of a big energy company. But as Willem said, since a couple of months, half a year or so, you are the President of Applied Computing. Also a really interesting technology we would like to talk about today. So also from my perspective, thank you for joining us. Can you just start by giving us a walkthrough of your career?

Willem (01:04)
Absolutely.

Dan Jeavons (01:17)
Yeah, sure. So let me say, like many people, my career wasn't particularly well planned. And that'll be a recurring theme through this conversation. I left university and joined Accenture. My client at the time was Shell. And I started off working in the North Sea, based in Aberdeen, I should say, with some of the platforms that Shell was running at that time.

I didn't really intend to end up in oil and gas. My dad was in retail; that was actually what I was interested in doing, but that's where I got assigned. And I think it's fair to say that although it wasn't where I intended to end up, I just loved it. I absolutely loved it. I became thoroughly fascinated with the energy system and the way that it powers our world. And the more I got into it, the more I realized that technology was going to play an enormous part

Willem (01:48)
Okay.

Dan Jeavons (02:12)
in the transformation of the energy system as we know it. At the time, and you can argue how transformational it was, we were trying to put in large ERP systems to capture data across the value chain and to help drive improvement. I think so, exactly. That's it. And I think also, you know, what I realized, and I think this was a pretty foundational insight, actually, and still relevant today, is that

Willem (02:25)
I think those had a big impact. I mean, try to think them away and wonder how things would have happened.

David Ariens (02:25)
Yes.

Yeah.

Dan Jeavons (02:42)
The biggest value of having an integrated system is the fact that you have an integrated data layer that you can then use to drive insights into your business continuously. And so I gravitated very quickly towards that world, because I was like, I don't like the system much, but the data is really interesting. So I started running extracts from SAP to do initially basic statistics and KPI reporting, and then gradually more and more sophisticated statistics that could help improve business processes.

And I spent the first part of my career traveling around Europe doing that. At some point during that, Shell recruited me, and I ended up doing the same thing in London for a while. And then I spent a stint in strategy: I worked with our CIO at Shell, trying to put in place strategies around how you think about process and data. I spent time in architecture, designing the first generation of IoT systems, from which I learned a lot. Some of them worked, some of them didn't; we can talk about that. And then,

I guess, probably 13, 14 years ago, I became completely convinced that the world was about to change. What I saw was that everything we were doing was like the spinning jennies in the front room before the Industrial Revolution, and that data science, advanced statistics, AI, that was going to be the future. We didn't call it AI back then, but the idea was the same. And so I started working in the cloud initially.

And then I started developing an advanced analytics team to look at how we could apply advanced statistics at Shell to our work as part of production-grade software. And that was really the mission: how do we start to embed this stuff into our day-to-day working to augment the existing capabilities, which are primarily physics-based models? And I really spent 13 years doing the same thing, which is getting better at doing that. And it started off...

We called it, when I started, the Advanced Analytics Center of Excellence, which I often tell people was neither advanced, nor a center, nor excellent. We were pretty terrible. But we got a lot better quite quickly, because we had to, and also because the technology was in our favor and things were moving very, very quickly. And by the time I left, I was VP for Computational Science and Digital Innovation, which meant I led both Shell's AI modeling teams and Shell's advanced physics modeling teams.

Dan Jeavons (04:58)
And my role was really looking at some of the most sophisticated problems that we had across Shell's business from a technology perspective, to develop solutions that would drive additional value. So, you know, I have a deep background in all of the above, and also the scars to prove it, in my 20-year history. And I think the final thing to say is that this is also what led me to Applied Computing, because what I'm most excited about, and I'm sure we'll get into it,

is that what we see now, for the first time, is these disparate modeling disciplines finally converging. And I think this is an inflection point for our industry, because historically it's been extremely difficult to deal with the fact that you either need to rely on physics and trade off accuracy in terms of the predictions, or you need to rely on statistics and trade off explainability. And that trade-off has been fundamental to the way that

almost everything has evolved, from statistical process control, to more advanced analytics, to the advanced process simulators that we use today. And what I see is that the next generation of AI is going to eliminate that barrier, because we're already seeing the confluence of physics and AI. We see that in the academic disciplines; every day there are new articles being published about this. And at Applied Computing, we're at the frontier of developing foundation models for the energy system which demonstrate the capability to do just that.

And that’s what gets me excited at the moment. That’s what gets me out of bed every day. That’s what we’re working on and working with our customers to try and accelerate that transition.

David Ariens (06:27)
That’s what brings you to hotel rooms across the world.

Willem (06:30)
the world.

Dan Jeavons (06:30)
That’s what brings me to hotel rooms across the world. That’s right.

Yeah. So it’s, it’s all good.

David Ariens (06:36)
Hey, so before we dive into the nitty-gritty data and AI stuff, I'm sure that for a big part of our listeners, the upstream oil and gas world is something really vague. What is that world all about?

You mentioned foundation models, there's probably also some exploration, there are probably different parts of this world, so could we start there?

Dan Jeavons (07:08)
Yeah, for sure. Maybe I'll broaden the question a little bit, because I think what's interesting, from a data and AI guy's perspective, and I'm sure some of my engineering colleagues will hate me for saying this, is that from a data perspective it all looks very similar. And the reason I say this is a few things. Number one, what is it? In the upstream space, what happens is you have a well, which has been drilled. You put that into production.

The producing well has a whole bunch of telemetry associated with it, which tells you how the production is happening. That's connected by a series of different lines and different configurations. There are typically multiple wells, and they get connected to a producing facility; in the offshore world, that's called the topside. And that connectivity allows you to pipe the oil and gas up through a variety of systems. Sometimes you need pumps in there; sometimes the pressure is sufficient to take it up to the surface.

And then effectively that topside is connected through a series of pipelines to the beach. Along the line, typically on a production facility, you end up with a separator, which allows you to split up the oil, the water and the gas coming out of the well. And those volumes are then piped through the system to the point at which they can be distributed. In the case of gas, the gas gets processed and put into the grid.

In the case of oil, it gets transported, and then the water gets processed. That's, in a nutshell, the system we're talking about. Now, let me pause for a second, because I will also say, you know, I'm an IT guy first and foremost, a data guy, as I alluded to in my background. And so I look at the world from that lens, which is a bit different to the classic engineers. But from a data perspective, what does that mean? Well, it means you have a distributed control system, which I often call a data capture system, which no one likes very much, but

David Ariens (08:55)
Mm-hmm.

Dan Jeavons (09:08)
in reality, you know, the distributed control system is processing lots and lots of data, bringing it together and then executing control events, which allow you to move things around within the process based on physics-based simulation. That's typically how these processes work. And the DCS systems look very similar whether you're in a refinery, an upstream oil and gas facility, an LNG train or a petrochemicals plant, because the vendors are the same.

David Ariens (09:33)
Mm-hmm.

Dan Jeavons (09:36)
So you'll find ABB, Honeywell, Schneider, et cetera. And so these systems dictate how a lot of these operations happen. And then, from a data perspective, what tends to happen is you push a lot of the data into a process historian. The process historian captures all the time series, and that's what the engineers use to generate insights on the basis of what's happening every day,

David Ariens (09:38)
Yeah.

Yeah.

Dan Jeavons (10:03)
and to think about reliability and integrity and all the good things that happen within the supporting engineering disciplines. So what I'm trying to say here is that, almost regardless of which type of asset you're talking about, the data landscape looks very similar. The tagging system looks very similar. And for me, one of the things I realized quite early on is that this creates an opportunity, because the algorithms that you're running might need to be tweaked, but there's a lot of commonality that can be adapted to allow you to deal with different types of

assets within the oil and gas and energy supply chain without having to make drastic changes to the core software that you’re running. And by the way, this is not a new insight. This is how Honeywell and ABB made their money. They figured out the same thing.

David Ariens (10:40)
Yep.

Yeah.

Willem (10:48)
I think this is one of the most exciting statistics talks that I've heard in a while. David and I also really like statistics. Now, you mentioned 13, 14 years ago. I mean, back then it was data science, machine learning, everybody needs to have Hadoop, even if they didn't know why or what it is. What's different between now and 13 years ago? What changed: the mathematics, the technology? Is it the Nvidia chips, or what is it?

Dan Jeavons (11:16)
Yeah. Well, let me tell you some horror stories to start with, because that's more fun, and then I'll explain why I'm telling them. I mean, our first solutions were actually statistical packages developed in a programming language called R. I don't know if you guys remember that one. Yeah.

David Ariens (11:37)
Yeah, yeah, I'm a mathematical engineer, so I come from the MATLAB,

Willem (11:37)
yes, yes, absolutely.

David Ariens (11:41)
R and C type of…

Willem (11:44)
I was really good at MATLAB too.

Dan Jeavons (11:47)
Well, funny story. At that time, we had two solutions in Shell. We had an R server, which was actually developed by a company called Revolution R; they're not around anymore. And then we also had another capability, which was the MATLAB Production Server, so we could deploy MATLAB into production and make that work. And we would connect those things to data feeds coming out of databases. And I think

maybe the revolution started with parallelism, really. And what do I mean by that? I think at the core of a lot of the change was, I mean, if you remember, R was single-threaded, right? So it would take absolutely ages to do anything. And you can imagine trying to run single-threaded algorithms across a scenario where you're dealing with

thousands of pieces of inventory, for example. It's a very cumbersome process.
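
The single-threaded bottleneck described here is easy to reproduce. A minimal sketch, with an entirely hypothetical per-item workload, comparing a sequential loop against a parallel map over an "inventory" list:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(item):
    # Stand-in for a per-inventory-item statistic (hypothetical workload).
    return sum(x * x for x in range(item % 100))

inventory = list(range(10_000))  # "thousands of pieces of inventory"

# Single-threaded, the R-of-the-time way: one item at a time.
sequential = [summarize(i) for i in inventory]

# Parallel map: identical results, with the work spread across workers.
# (For genuinely CPU-bound statistics you would swap in ProcessPoolExecutor,
# since Python threads share one interpreter lock.)
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(summarize, inventory))

assert sequential == parallel
```

The point is not the toy workload but the shape of the change: once the computation is expressed as an independent map over items, a framework like Spark can distribute the same map across a cluster instead of a thread pool.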

David Ariens (12:45)
So I actually bought a computer and hid it under my desk, and I would start the computation in the evening. And then when I came back in the morning, I was hoping, praying for an outcome.

Willem (12:58)
Nothing went wrong along the way.

Dan Jeavons (12:59)
Yeah, exactly.

Well, I remember a very similar story. My statistician used to have this enormous laptop with as many CPUs as he could possibly get in it at the time. And he would do a similar thing, and this thing would be heating the room because it was processing so much. But yeah, these were the things that we were doing at that time. And I think the reality was that,

David Ariens (13:16)
Yeah.

Dan Jeavons (13:29)
for me, the first real step change was Apache Spark. So I got very excited about the developments that Databricks were doing, for two reasons. And you alluded to it: number one, Hadoop was an awful system. I mean, it was designed for the internet; it was not designed for corporate data management. And so, yeah, I imagine you probably had similar experiences with it.

Willem (13:52)
We tried to explain, like, you know, it's a technology, it has its uses, just not in this use case.

Dan Jeavons (13:59)
Exactly. And so I think, for us, what was exciting about Spark was that suddenly you had an MPP layer that ran on a file system. And that was a real step change and a breakthrough in enabling this type of technology to be brought into the enterprise. And so we at Shell were a very early adopter of Databricks technology; we worked very closely with the team.

And that really started us down the road which ultimately ended up in us realizing that we could start to bring time series into large-scale data lakes and then train algorithms on top of them. And that, for me, was probably the breakthrough, because all of a sudden you could go from, frankly, offline batch systems running interpretive algorithms on lagged data to a real-time system that can give you actual predictions and allow you to start to make interventions. And I think, you know,

David Ariens (14:53)
Yeah.

Dan Jeavons (14:55)
everyone was doing similar things. It went down the route of predictive maintenance first; that was the most obvious use case, where you could identify anomalies in advance and then have engineers explain them. So I think that was the real breakthrough. And then of course, in parallel, the algorithms just exponentially improved, driven by the increasing volumes of data generated by the clouds and the internet. And that of course meant that we could start to bring deep learning together with some of these new data frameworks.
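
The anomaly flagging described here, reduced to its simplest form, is a statistic scored against a trailing window of a tag's time series. A minimal sketch on synthetic sensor data (the signal, window and threshold are all hypothetical, nothing like a production system):

```python
from statistics import mean, stdev

def rolling_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose z-score against a trailing window exceeds threshold."""
    flagged = []
    for i in range(window, len(values)):
        past = values[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Synthetic "sensor" signal: a steady cycle around 50 with one injected spike.
signal = [50.0 + 0.1 * (i % 5) for i in range(100)]
signal[70] = 58.0  # injected anomaly
print(rolling_anomalies(signal))  # → [70]
```

In practice, deep learning models replace this hand-rolled statistic, but the shape of the pipeline is similar: history streamed from the data lake is scored continuously, and flagged points go to an engineer for explanation.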

David Ariens (15:05)
Yeah.

This brings us to the IT/OT discussion. So I think the people who follow our work for a bit longer know that we don't really like the term IT/OT convergence, as it implies a lot of things which are maybe not per se what it should be all about. But these ways of working are only possible if somehow you bring those two silos together, because it's not just about the pure technologies, you already mentioned Databricks, but also about ways of thinking, about process know-how. So if people ask you, what is IT/OT convergence, what is your answer? How do you see this?

Dan Jeavons (15:58)
100%.

So I'm going to be really controversial here. Maybe it's not so controversial. I think we're only just scratching the surface in terms of what's possible with converging the IT and OT estates. People talk about convergence; I don't believe that's happened yet. I think what we've done in most businesses, and I have the privilege now of not only knowing the Shell story but also dealing with a lot of other companies too,

David Ariens (16:21)
Yes, no, shoot.

Yeah.

Dan Jeavons (16:47)
and what I see is quite consistent: you have a distributed control system, which is at the heart of the operation, and then you have a bunch of siloed engineering processes that surround it, whether that's reliability engineering, integrity engineering, maintenance, safety, et cetera. Right? And all of them have their own tool of choice, which sits in the office domain and uses a fraction of the data that's available to them,

typically offline. They do some form of data-driven stuff, some form of physics stuff, and they use this to generate insights. Now,

David Ariens (17:20)
Mm-hmm.

Dan Jeavons (17:28)
in most assets that I've been to, they might be a bit more sophisticated than that, but that world still exists. I don't think I know any asset where that world doesn't exist. Now, what's happening gradually, and I alluded to it, exactly to your point, is that these cloud-based layers are coming in, where you can start to put much more data into them. And as a result,

David Ariens (17:38)
Yeah.

Dan Jeavons (17:51)
when you combine that with AI, you can start to be much more predictive, because the AI algorithms are just much more predictive, and accurately predictive, if you give them sufficient data to train on. So if you put a lot of data into a cloud-based system and you use the transformer architecture, one of the most modern forms of deep learning, you can get a highly predictive algorithm, which is very good for certain tasks. Typical tasks include things like corrosion management, identifying where you have corrosion spots using drone footage, or,

David Ariens (18:16)
Yep.

Dan Jeavons (18:20)
as I mentioned, identifying anomalies which may result in failures across the facility. But really, if you think about it, being critical of my own work and the things that I've done over the last 13 years, and talking about where we haven't succeeded yet: what we've done is add a layer of intelligence on top of the existing systems. We haven't changed very much. You now have an extra system which, okay, gives you some more insights, but it hasn't changed the work process yet.

David Ariens (18:39)
Mm-hmm.

Dan Jeavons (18:48)
And so I think the real challenge for us now is not just to think about how we can use AI to provide insight, which was the original selling point. It's about how we can use AI to transform the ways of working within these facilities and alter the mechanisms through which the traditional engineering disciplines interact in order to operate the plant, maintain the plant and manage the integrity of the plant. And for me,

David Ariens (19:12)
Yeah.

Dan Jeavons (19:16)
that's when we're going to see true IT/OT convergence: when people start to rethink the operation. Sorry, I get excited when I'm talking about this, as you can probably tell.

David Ariens (19:25)
I was going to say that I like that. I think we need more people who really get excited about changing the status quo, let's put it that way. So Willem and I did a presentation a couple of months ago where we talked about two, in our opinion, critical problems to

solve, to overcome, when we really talk about this AI which becomes more than what we see today. You mentioned data, but we didn't really touch the digital twin concept. Today, you don't have a digital twin; in many cases, you have all kinds of separate systems. Even if you would take that data and just bring it into a cloud system, it would still be 20 different

databases, to put it really black and white, if not much, much more indeed. So there is this problem of integrating towards one digital twin. And then the second one, maybe we can touch on that later: understanding the physical reality of what is going on with the physical assets, basically. So do you…

Willem (20:24)
If not much, much more, I mean…

Dan Jeavons (20:41)
Yes. Yes.

David Ariens (20:48)
Do you also see these two problems, or do you see other problems? What is your opinion on this digital twin and physical modeling problem?

Dan Jeavons (20:55)
So I agree with both of them, and I'll add a third one for you: change management. We can probably pick that up separately, but let me touch on your two problems first, because I think they're spot on. So, if we deal with the digital twin side of things:

David Ariens (21:00)
Yeah. Yeah.

Dan Jeavons (21:15)
Digital twin. So I'm first of all going to define what we mean by digital twin, because a lot of people are out there saying, I do digital twin technology. The reality is, I haven't really found anyone that does what I would describe as a true digital twin. And what I mean by that is, for me, a digital twin does three things. It's able to digitally represent the physical reality, both in terms of three-dimensionality

and in terms of data; it's able to be interrogated in real time by a human, to answer any given question within that context in terms of the what; and it's able to run advanced simulations that explain the why and the what next. For me, that's what you have to be aiming for. If you're not aiming for that,

David Ariens (21:58)
Mm-hmm.

Dan Jeavons (22:12)
you're doing some kind of digital copy of the site, or you're creating a new 3D visualization, or you're creating a new simulator. The whole idea of the digital twin is that it has to alter the operating philosophy, and you have to be able to run your work processes from within the twin. So I think you're right that the fundamental challenge is that, to get there, you have to be able to make

David Ariens (22:19)
Yeah. Yeah.

Dan Jeavons (22:40)
a digital copy of physical reality, which involves being able to deal with three-dimensional space plus disparate documents and data sets. That's fundamental. Now, my argument is that's not so hard anymore. If you go back two or three years, that was a really difficult, unsolved problem, largely because the machine vision wasn't there, the natural language processing wasn't there, the capabilities of foundation models weren't there.

David Ariens (23:08)
Mm-hmm.

Dan Jeavons (23:09)
And as a result, it was very, very difficult. And what's interesting, and I won't name names here, is that a whole generation of companies sprang up trying to solve that problem using last-generation technology. And by the way, with some success; I'm not saying those things weren't successful. They moved us forwards. I think what's important to recognize now is that with modern AI, there's almost none of those problems that isn't solved. So, you know,

within Applied Computing, we have the ability to ingest a P&ID and extract the full information from it in real time. Three-dimensional reconstruction using NeRF technology is there; it's doable, you can do that right now, and most facilities have 3D scans now. NVIDIA Omniverse is building technology where you can create high-fidelity physics-based simulations. So the technology exists. I just think we haven't stitched it together yet.

And again, that's one of the missions I see for us at Applied Computing: to do exactly that. The second part I think is also true, and it goes further. It says, okay, to get to the full digital twin, you also need to be able to replicate those physics-based simulations, because that's what gives you the why and the what next. Statistics might be able to tell you the what to some degree, but the why and the what next require an understanding and a grounding

David Ariens (24:24)
Mm-hmm.

Yeah.

Dan Jeavons (24:33)
in the standard operating procedures, the engineering know-how, the chemical engineering processes and so on that form the foundation of the processes these facilities operate. So for me, the thing that's going to happen next, which again is what we're working on at Applied Computing, is the confluence of digital twin technology that allows you to bring together full data and full physics in an AI foundation model.

And with that, which I know is possible because I see it every day, I see the path towards a true transformation, because you can really rethink how you do things on the basis of a technology like that.

Willem (25:18)
You mentioned the third one, and I think it's a really good one, about change. So if it's possible and it's so wonderful, why isn't everybody doing it? Why isn't everybody as convinced that we should bring this in? Where's the bump in the road that you meet most often?

David Ariens (25:23)
Yeah.

Dan Jeavons (25:38)
So it’s really interesting.

I'm just going to play devil's advocate to my own argument here, because I think it's really important to recognize the weakness in your own position. The counter-argument to everything I've just said goes along these lines: these facilities are extremely risky. If you make small changes, things can go horribly wrong, and we know full well the consequences of those things. We've run it for a very long time; we know how it runs. Let's not change anything.

Let's just do as little as we possibly can, keep the kit running and make small incremental improvements where we can. Our foundational raison d'être as an asset is to provide the agreed return on capital to our shareholders, and the best way we do that is to run the asset as safely and conservatively as possible. And the other element to that is what we also want to do,

David Ariens (26:24)
Mm-hmm.

Dan Jeavons (26:44)
which is maintain the expertise in our engineering disciplines. That means we need to run them as disciplines and not have people cutting across multiple different problem sets, which distracts them and means they're not focused on the problems they're supposed to be focused on. Now, if you play that out, it's quite compelling, actually, as an argument, right? And if you've done it for 40 years and it's worked just fine,

it's even more compelling, because making the case for change is very, very hard, which is the reality at a lot of these sites.

David Ariens (27:16)
Mm-hmm.

Dan Jeavons (27:23)
So when I come up against that, the question of why change, I think, is really fundamental. And I think the biggest challenge is how you get the narrative right, to help people understand the change journey they have to go on. And not just the early adopters and the mavericks: you have to take senior leadership with you on a journey. That's what's going to get you there. And the way I narrate this has a few angles. Number one:

even with the way that you run things today, we still have significant issues. We still have significant reliability problems, and we know that we're not running the site as optimally as we could. Anyone who's honest, who's worked in one of these facilities, knows that all those things are true, and that there is still safety exposure within the systems we run today. And we see that in the results and the things that sadly still happen.

The second thing, which I think is equally important, is that most of these sites are under enormous cost pressure right now, and so they can't afford to keep running the way they've run for years, for a variety of different economic and social reasons. The third angle is the fact that we're losing the expertise, which is increasing the risk, because we have a crew change going on where a lot of the people who have decades of experience are leaving the industry.

David Ariens (28:32)
Yeah.

Mm-hmm.

Dan Jeavons (28:54)
And the final one, which is the real kicker, is that someone's going to do it. Someone is going to figure this out, and when they do, they're going to be 50% more efficient. And if you're not on that train when that happens,

you know, it's going to be hard. Let's just put it like that. And so I think, for me, there's an imperative for leadership in these facilities to think about what's about to happen in the next 10 years from all the angles that I just described. Not ignoring any of the counter-arguments that I've put forward, but thinking, on the basis of all of this, what type of system do you need to put in place to get ahead of that? So you understand what you're looking for, so that when it comes, you're ready for it.

David Ariens (29:34)
Yeah.

That's a very interesting insight. With that same, let's say, risk-averse mindset, we also have to mention the cloud. I'm also going to be the devil's advocate here, and rightfully so. We've seen, in recent weeks again, some issues with Cloudflare and some issues with AWS, et cetera, where

Dan Jeavons (29:59)
Yes?

David Ariens (30:14)
entire applications, entire regions became unavailable, or at least highly disturbed, which gives, I would say, an extra argument to many people: we don't really know whether we can trust the cloud for critical operations, critical applications. Where do you see that boundary? Maybe also,

again, just as a thought exercise here: I think until today we typically have this saying, or this way of working, where we go, okay, everything which is operations-relevant stays on-prem, as close as possible to the assets, and for the things we move to the cloud, we'll accept some issues or some downtime. How do you see that?

Dan Jeavons (31:08)
Yeah. So let me start with what I think is happening with the Purdue model, right? In my view, and again, I'm a little bit controversial on this, the view that being air-gapped means you're safe is, for me, just a complete fallacy. And I think most people would accept that now; we've seen enough examples where

David Ariens (31:27)
Yeah. Yeah.

Dan Jeavons (31:36)
the ability to create barriers between the IT and the OT layer using classical air-gap methods is just not realistic. So you then have to say, well, in reality, I'm running, let's call it a virtual private cloud, because that's really what it is, because most of the infrastructure is virtualized anyway within my on-prem environment, which as a result is exposed to all the same threats that the internet is.

People may argue with me on this, and I understand the arguments against, but effectively, in my world, that's where we are. Now, if that's true, you're then saying: I believe that the IT technicians sitting at my site are better equipped to manage the threats of the modern internet than Microsoft, than Amazon.

I just cannot believe that is true, right? I think that's an insane statement, because these are the companies with the most sophisticated AI in the world. Do we really believe they're not using that to help them protect their core business? So you're then saying: I'm relying on my friends because I trust them, and I don't trust Microsoft.

And I just think this is a silly argument. Now, there is an element of it that is true. The bigger problem for me is not the cyber argument, it's the argument about upgrades. Because the big issue, and you see it continually, you saw it with Cloudflare again, is that the CI/CD approach to updating software, which has been inherent in cloud-based architecture so far, results in small changes sometimes causing problems.

Now, typically they roll back within a day, but that is an unacceptable scenario within an operating facility. And so I think the answer is that the world has to meet halfway. We have to find an operating philosophy where change control is more inherent than in a foundational cloud architecture, and we have to find an operating philosophy where we can take advantage of modern security architecture. Because if we don't,

David Ariens (33:53)
Mm-hmm.

Dan Jeavons (34:02)
we're going to end up in trouble. And by the way, that also means that agile patching becomes a requirement. So let's get used to it, right? Any patch can cause a critical failure to an IT system. And if we're doing regular patching, which everyone knows it is insane not to do, you're going to have to get into a more agile change process with your OT landscape. It's a given. So what I'm trying to say is there's a rethink needed. There is some adaptation needed from the cloud, but there's

David Ariens (34:12)
Yeah.

Mm-hmm.

Dan Jeavons (34:32)
a significant level of rethink required from the conventional OT system providers to think about how they can provide more security to their customers in line with modern internet security. Because in the age of AI, that’s essential.
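The halfway point Dan argues for, keeping the cadence of modern patching but putting an OT-grade gate in front of every expansion of the blast radius, can be sketched as a staged rollout with automatic rollback. This is a toy illustration under my own assumptions; the function names are hypothetical, not any vendor's API:

```python
# Staged (canary) rollout with automatic rollback: a sketch of a middle
# ground between cloud-style continuous deployment and OT change control.
# Everything here is hypothetical, invented for illustration.

def staged_rollout(nodes, deploy, health_check, stages=(0.05, 0.25, 1.0)):
    """Deploy to growing fractions of the fleet; roll everything back on failure."""
    deployed = []
    for fraction in stages:
        target = int(len(nodes) * fraction)
        for node in nodes[len(deployed):target]:
            deploy(node, version="new")
            deployed.append(node)
        # Gate each stage on a fleet-wide health check before expanding.
        if not all(health_check(n) for n in deployed):
            for node in deployed:
                deploy(node, version="previous")  # automatic rollback
            return ("rolled_back", len(deployed))
    return ("complete", len(deployed))

# A patch that breaks node 7 never reaches the full fleet: the 5% canary
# stage passes, the 25% stage catches it, and everything rolls back.
nodes = list(range(100))
result = staged_rollout(nodes, lambda n, version: None,
                        health_check=lambda n: n != 7)
```

The point of the design is the one in the conversation: the change still ships continuously, but no stage widens until the previous one has proven healthy.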

Willem (34:53)
I also want to add: the Purdue model is also a very infrastructure-centric view of security. And honestly, there are so many other attack vectors out there that don't give a fuck about whether your system is air-gapped or not. I mean, supply chain attacks, they will come in if you do not have anything in place. So that for sure. Going slightly back to the change part…

Dan Jeavons (34:59)
100 %

Absolutely.

That’s right.

Willem (35:21)
One aspect that I find interesting is maybe to look at what are things that companies should develop. Because I wonder sometimes, do they have the necessary muscle or skill to be able to implement those things? Because it’s not just about buying something off the shelf, installing it, and here’s my return on investment, which is a very classic engineering way of thinking that’s served them well for hundreds of years. It’s also about how do I use these tools in a way that’s more effective than the others?

You’ve seen many places, many companies. What’s your take on that?

Dan Jeavons (35:56)
So it's interesting, right? And also, this comes back to my own story of why I ditched the life plan and went to work for a crazy startup. I think the reality is that this has changed a lot in the last few years. I would say the post-2023 era, the foundation model era, is a different world to where we were previously. And the reason I say that is:

If you take the era from, I would say, probably 2008, 2009 through to about 2023, these data-driven algorithms were getting better and better and better. But the deployment of these architectures was focused on individual use cases. So in other words, I can build a killer algorithm that can predict corrosion better than anything else. Or I can create a killer algorithm that can predict valve failure better than anything else.

David Ariens (36:42)
Yeah.

Dan Jeavons (36:56)
Now, if you're a company like Shell, or any other supermajor, within those environments you have an enormous amount of data to solve those specific problems. And it's a very coherent argument to say: I know my business better than anybody else, and as a result I can solve my problems, using my data, better than anyone else. And therefore having an in-house modeling capability that can develop models specifically for my business makes an enormous amount of sense.

And I was a big subscriber to that philosophy, because I saw that a lot of the public domain work that others were doing was too generic to drive value within our organization.

I think 2023 changed all that, because what it showed is that the next generation of models is going to be general purpose. In other words, you're not going to have a use case for valves, a use case for compressors, a use case for optimization, and three different algorithms supporting those with three different front-end dashboards. You're going to have one foundation model which can solve all of those things, because it's going to understand the physics, it's going to understand the language, it's going to understand the time series, and it's going to be able to predict.

David Ariens (37:47)
Mm-hmm.

Dan Jeavons (38:10)
And so for me, it's that foundation model shift that means the center of gravity for this type of problem solving is going to switch from internal to the enterprise to external to the enterprise. Because, put simply, someone who's working with multiple customers, building a general-purpose foundation model and learning from each of them, is going to have a better foundation model than a company building their own. And it's ultimately that realization that led me to do what I'm doing.

David Ariens (38:39)
Okay, so how do you build a general foundation model for, let's not even say the industry, but let's say for crackers, or whatever? How do you actually start with that? From my past, I started as an MPC engineer. So you've also seen MPC projects, not MCP,

MPC projects. You know, that's a multi-year project. You start to model all parts of the plant, you do design of experiments, you create soft sensors. And if, let's say, your operating points change a little bit, outside the boundaries you've created in your modeling, then it's back to the drawing board. So, in my mind,

building a model in the traditional way is like a super expensive, super complex exercise. But how do you generalize that?

Dan Jeavons (39:47)
Yeah, it's super interesting, right? And again, I'm going to intertwine my answer with my story, because it helps a little bit in this particular situation. This was also something I was struggling with, frankly. Because I agree with you, I completely agree with you. I'm dyed in the wool in the same philosophy that you are. I understand it deeply, and that's been the world I've inhabited for the best part of a decade.

But I think there was a realization. So let me step back a little bit. We had been looking at natural language processing in Shell for a long time. And it was probably 2021 when we suddenly started to see these transformers, and then generative pre-trained transformers, taking off. The performance benchmarks were getting smashed on a monthly or even weekly basis.

David Ariens (40:44)
Yeah.

Dan Jeavons (40:44)
And for me, that was a real alarm bell. It's like, okay, crap. As you will know, natural language processing was the holy grail of AI for decades, right? It was the one thing that nobody could crack, and the systems were horrible. They didn't work well; you got the silly first-generation chatbots, it was horrendous. Exactly. It's been tried for years, right?

Willem (40:53)
Yeah.

David Ariens (40:57)
Yeah.

Willem (41:04)
Yeah.

David Ariens (41:05)
Clippy in my…

Dan Jeavons (41:13)
And so for me, you know, seeing that generalizability of NLP, you have to start asking the question: if that can generalize, what else can be generalized? Now, the other thing that happened simultaneously was that we started to see the evolution of physics-inspired deep learning. I would say that's still quite early even now, but it's the idea of training a transformer architecture on, effectively, a set of generated simulator outputs to create a synthetic simulator.

I got very excited about that, and the results started to come in saying this is clearly a hundred thousand, a million X faster than your conventional simulation methods. Which brings you to the point where this can be run in real time, which was always a big blocker. And then the final thing was the transformer architectures for time series. Whether you look at developments in scikit-learn or elsewhere, the forecasting algorithms were just getting better and better.
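The "synthetic simulator" idea, training a cheap model on outputs generated by an expensive physics simulator and then querying the cheap model in real time, can be sketched in a few lines. This is a deliberately trivial stand-in: first-order decay as the physics and a least-squares polynomial surrogate, my own toy choices, not the transformer-based approach Dan describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an "expensive" physics simulator: first-order decay,
# concentration c(k, t) = exp(-k * t).
def simulate(k, t):
    return np.exp(-k * t)

# 1) Run the simulator across the operating envelope to generate training data.
k = rng.uniform(0.1, 1.0, 2000)
t = rng.uniform(0.0, 2.0, 2000)
y = simulate(k, t)

# 2) Fit a cheap surrogate: least squares on simple polynomial features.
def features(k, t):
    u = k * t
    return np.stack([np.ones_like(u), k, t, u, u**2, u**3, u**4], axis=1)

coef, *_ = np.linalg.lstsq(features(k, t), y, rcond=None)

# 3) A query is now a single matrix product, which is the property that makes
#    real-time use plausible when the underlying simulation is genuinely slow.
def surrogate(k, t):
    k = np.atleast_1d(np.asarray(k, dtype=float))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return features(k, t) @ coef

err = float(np.max(np.abs(surrogate(k, t) - y)))
```

The shape of the argument carries over even though the pieces here are toys: generate data from the physics, compress it into a fast learned model, then run that model where the original simulator could never keep up.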

David Ariens (42:09)
Yeah.

Dan Jeavons (42:10)
You know, IBM were coming out with their Granite series, and you started to see the transformer architectures just starting to take off. So these three trends were obvious: generalizable physics, generalizable language, generalizable time series. I'll be honest in saying I spent a lot of time over those few years scratching my head, going: how the hell do you put this together? And I thought about it.

Dan Jeavons (42:40)
We ran projects on it. We tried things; nothing worked. And then a guy who used to work for me one day rocked up at my office in Bangalore: a gentleman by the name of Sam Tucker, who's co-founder and chief AI officer at Applied Computing. And Sam just said, I've got something to show you, have a look. And he stuck me in front of a screen. And what he had done

was figure out how to make those three things talk to each other. And then he put an anti-hallucination reasoning layer on top of it. So effectively he built an agentic system which cross-validated itself between those different disciplines, and then made sure that it didn't error. Now, what's really interesting about that is, for me, it was a number of things all in one go.

It was a moment of sheer frustration and irritation, because he'd done the thing that I'd been trying to do, and did it way better than I could. There was a little bit of pride, because the guy was part of my team, and at least I might have had some semblance of input at some point along the way. And the third thing was the wow: now what can I do? And then suddenly the list started coming. And let me just

David Ariens (44:03)
Yeah.

Dan Jeavons (44:07)
play out why it's so cool, why what Sam's done with Orbital is so cool. If you think about a language model: a language model can easily be used to create context, because a user can create context. You say, I want to deal with, and you talked about a unit, the CCR unit, for example. So straight away you create a bounding context around the questions you want to ask. It's very easy. The physics for that is pretty well understood.

And if you can estimate the physics on the basis of the time series, you can actually create a very accurate and explainable set of predictions when you pair those two things together. And then, if you take the feedback from the output of that and push it back into the language model, suddenly you've got context and explainability, because physics can be explained on the basis of scientific literature. So the knowledge of the physics is actually embedded in the literature itself.

David Ariens (44:55)
Yeah.

That’s interesting that you really start from literature.

Dan Jeavons (45:04)
You start from literature, you start from literature. And by the way, the physics is all in the literature too, because all of those nice little equations that sit in those scientific papers, back to my point on machine vision, can all be extracted, stored, processed, fed into a physics model, and then used to improve it. And then, on top of that, the other real kicker is to say: LLMs are actually really good at marking homework now. Right? So,

a lot of people are using them for that. And so you can train an LLM, with a combination of publicly available literature and expert feedback, to act like a teacher and to say: go away and check your work. And so if you have a powerful set of three models generating highly predictive content, and an anti-hallucination layer putting skepticism on top of it, like an engineer would, suddenly

Willem (45:57)
Hmm.

David Ariens (45:58)
Yeah.

Dan Jeavons (46:01)
I would argue you're in the world of generalizability. And that's exactly what we're proving with every deployment we're doing right now at Applied Computing: this can do reliability engineering, it can do maintenance engineering, it can do integrity, it can do safety, it can even do economics, because actually it's quite good at that. And if you think about a world where the AI can do all that, the question comes back to where I started the conversation: how do you rethink your ways of working?

What does that do to the way in which you operate your plant?
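The cross-validating "skeptical engineer" layer Dan describes can be caricatured as a proposer whose candidate answers only pass if an independent physics check signs off. A toy illustration only, not Applied Computing's Orbital architecture; the mass-balance check and all names here are made up:

```python
# Toy "anti-hallucination" layer: candidate predictions from a proposer only
# pass if an independent check, here a trivial mass balance, signs off.
# The schema and names below are invented for this sketch.

def physics_check(candidate, tolerance=0.05):
    """Skeptical-engineer check: product streams must sum to the feed."""
    return abs(sum(candidate["outputs"]) - candidate["feed"]) <= tolerance * candidate["feed"]

def validated_answers(proposals):
    """Split proposals into physically consistent and rejected ones."""
    accepted, rejected = [], []
    for p in proposals:
        (accepted if physics_check(p) else rejected).append(p)
    return accepted, rejected

proposals = [
    {"feed": 100.0, "outputs": [60.0, 39.5]},  # balances within 5%: passes
    {"feed": 100.0, "outputs": [80.0, 40.0]},  # creates mass from nowhere: rejected
]
ok, bad = validated_answers(proposals)
```

In the real system the "check" would itself be a physics model and the proposer a foundation model, but the control flow is the same: nothing reaches the user until the cross-discipline validation agrees.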

Willem (46:35)
I think that question is going to come on the table much faster than some people think, especially if you've been working the same way for the past, let's be honest, hundreds of years. I mean, this is in their DNA. I think, David, we do have to wrap up this podcast, but I would be extremely interested to get together in a couple of years and see if that change is actually already happening.

Dan Jeavons (46:40)
I agree.

Absolutely.

Willem (47:05)
And whether operations and engineering have actually reviewed their role. Because, if what you're saying is happening: first of all, I'm super excited about the mathematics behind this. Now I do have to read up on transformer models for physics, because I knew them more from language processing. But it sounds great. I'm very excited to see what's going to come out of it, and how it will affect our lives in production.

Dan Jeavons (47:33)
Well, I was going to say thank you to you guys. I mean, what you're doing is amazing. I think the way you're driving the change is fantastic, and absolutely crucial as part of the change management. I just loved the conversation, so thank you so much for having me on.

David Ariens (47:46)
Yeah, the pleasure is all ours. That's indeed a wrap for this episode. We'll have to reconvene, we'll have to plan for some kind of a…

Willem (47:55)
Yes, yes, yes. I want to see… Let’s look back

in a couple of years. Oh, how naive were we then, David. Yes, that's how we did things back then. So cute. So artisanal.

David Ariens (48:01)
So cute. So cute.

Okey dokey. Good. Dan, thank you so, so, so much for joining. Super insightful; time flew by. Thanks also to our listeners for tuning in again. If you enjoyed the conversation, as always, don't forget to subscribe at itotinsider.com, and discover our trainings via itot.academy.

Dan Jeavons (48:10)
Hahaha!

David Ariens (48:30)
And see you next time for more insights on bridging IT and OT. Until then, take care. Bye bye!