
David (00:00)
Welcome, you’re listening to the ITOT Insider podcast. I’m your host David. Subscribe to get the latest insights shaping the world of industry 4.0 and smart manufacturing. Today I welcome Wilhelm Klein. Wilhelm has an academic background in sustainability and technology and AI ethics, and he’s the co-founder and CEO of ZetaMotion, where he focuses on bringing AI solutions to the world of quality control.

Wilhelm thank you for joining me.

Wilhelm (00:32)
Thank you very much for the invitation, David. It’s a real delight to be here. Look forward to the conversation.

David (00:38)
Yeah, so why don’t we start with, I would say, just a short introduction. I’d like to know a bit more about you. So why don’t we start with that?

Wilhelm (00:47)
Yeah, happy to. So I was born and raised in Germany. I've grown up to be a kind of micro-technologist. I was raised on Star Trek and Lord of the Rings. I think that's about the right combination. And I was, I don't know how old, maybe 11 years old or so, when I completely took apart my parents' vacuum cleaner and then rebuilt it, and it was still sort of working,

kind of setting me off on this sort of tinkery path. But I've also supplemented that with a deep interest in not just how technology works, but how everything works, essentially. And that has, you know, through some interesting side tracks, following interests in different areas, led me on an academic path

towards technology ethics and sustainability. And yeah, that got me on the track where I ultimately ended up doing a PhD in probably one of the most cyberpunk places on the planet, which is Hong Kong, combining the highest tech with some of the lowest living conditions you can find anywhere. I have also spent some time

heading a hackerspace called Dim Sum Labs in Hong Kong, and at the same time doing some of the, well, not earliest, but earlier work on tech ethics, AI ethics, and so on. And most recently I've supplemented this following of interests by founding a deep tech startup, which is applying artificial intelligence,

particularly in the machine vision space, helping manufacturers and other quality providers automate their quality control. So that's the extremely quick run through what amounts to many years of interesting paths that I've followed.

David (03:03)
Interesting. In my case, it was more the A-Team that was a very important part of my younger life. Always duct tape to the rescue, I would say. But this is super interesting, especially the ethics part. It's actually also the first time we talk about ethics on this podcast, so I'm really looking forward to that. So…

Wilhelm (03:07)
And MacGyver. MacGyver. Very important.

David (03:30)
Let’s just dive into the topic of artificial intelligence. Is the hype still going strong or is reality kicking in? What do you think?

Wilhelm (03:42)
So the hype is still going strong, and justifiably so, because it really cannot be overstated how important recent developments, and developments that are just on the horizon, are on a personal level, on a business level, and then also on a societal level. And indeed, I believe the next

five to ten years are going to be nothing short of revolutionary in terms of what we're going to see. I mean, everywhere in the business world, of course, you see how AI is described as a horizontally applicable technology. That means there's virtually no application

space, virtually no area in the operation of a business, where AI might not be able to bring about significant improvements. And then similarly on a personal level, if you truly engage with some of the latest tools, on the simplest level that would be using something like a chatbot, one of the larger LLMs

like ChatGPT, you can significantly streamline some of your daily tasks, be that emailing or setting up particular schedules or planning your next vacation. But also, I mean, from a vocational perspective, it's really quite fascinating to see how many of the tasks that used to be manual can now be automated.

Yeah, I mean, I have lots of nieces and nephews who are now moving towards graduating in the next couple of years, or are about to graduate, into that very interesting space of the next five to ten years. And they're thinking very seriously about what trajectory they might want to look at, right? Because it's probably quite clear that a significant chunk of jobs are not going to be around

any longer in the next five to ten years, because it's much easier to automate them. And then finally, on the societal level, we're currently sort of divided between the AI optimists and the AI doomers, and both of them have very good arguments. And what I find quite interesting, I mean, it's really difficult to present

David (06:12)
Yeah.

Mm-hmm.

Wilhelm (06:35)
truly convincing arguments against the potential detrimental consequences that we might see in the end. But it kind of depends on what your area of focus is. If you're looking to preserve the status quo, then there's a non-insignificant chance that that is significantly at risk.

David (06:44)
Ha ha ha.

Wilhelm (07:00)
But if you're fine with embracing a future that might be quite different, but still preserves the light of consciousness, then there's very little to worry about.

David (07:09)
There is this interesting thing where, just yesterday or a couple of days ago, I saw this video from the BBC, and I think it's about, I don't know, 25, 30, 35 years old, where they predicted the future. So it was before even the personal computer. And in this video, you saw this device which somehow resembles a laptop,

and they predicted the future. Now we're doing the same again. So people today are predicting what will happen in five, ten, fifteen years. Now, a remarkable thing from that video was that what they predicted to change were what we consider very simple tasks. Things where, I don't know, you have a robot who does the dishes or…

things we perceive as something very, very simple. Now I think we've actually seen the opposite happening. We've seen automation of very complex tasks and not the simple ones. What's your take on this?

Wilhelm (08:23)
Yes, indeed. This is an extremely fascinating phenomenon. And it makes perfect sense if you look at it from a human intuition perspective. And then at the same time, you look at it from an evolutionary perspective with regards to how neural networks work and how the human brain works and the time that our particular neural networks had to train.

and refine themselves. So from these perspectives, it makes perfect sense to have this strange reversal of predictions. So as you rightly outlined, the predictions, if you go back several decades, about where we might find automation first, were about what we consider simple tasks, maybe menial, low-skilled labor, as is sometimes

the unfortunate label for this. And it describes things like the proverbial flipping of burgers, or being a postman, or milk delivery. These were usually considered very low-skill types of work. And surely, of course, whatever is low-skill is going to be the first thing that robots or an AI might be able to address,

because for us, that's the easiest thing. So surely we'll start with the easiest, right? But the reality is that that completely disregarded that what makes these tasks easy for us is that evolution has done all of the hard labor for us already. Because evolution has already pre-trained our brains to be really good at physics, to be really good at

world modeling, essentially. So all of us already come with this extremely capable pre-trained world-modeling engine in our brain, which creates these really, really high-fidelity, what is essentially a game engine, right? It's a game engine that takes in sensory data and creates our perception of the world, which is not the real world, right? Nobody sees the real world. We're all living in a dream world.

David (10:37)
Yeah, yeah.

Wilhelm (10:46)
We're all living inside our perception. We're all perceiving a simulated game-engine version that is built on the basis of what we can take in. And that is already very, very fine-tuned. And that we still have to teach to the machine. And that's the big difference, why it's so difficult in comparison for a machine to learn all of this, what took us

thousands, hundreds of thousands, or even millions of years. I mean, it goes back a very long time to create these networks. We still have to figure that out for them. And that's why for us, it's easy, but for computers, it's extremely, extremely difficult.

David (11:21)
Mm-hmm.

I mentioned that I grew up with the A-Team, but I grew up with the A-Team and then The Matrix. I actually immediately got this Matrix vibe. But yeah, absolutely. It's interesting to see, for example, and that's maybe also an interesting bridge to, let's say, computer vision. So, well.

Wilhelm (11:38)
yes.

Yeah, it’s true.

Mm-hmm.

David (12:01)
I have two questions here, but let's start just with computer vision 101. And then let's take a step towards, I would say, the complexities of how to interpret images. But why don't we start with computer vision 101? What is computer vision?

Wilhelm (12:07)
Mm-hmm.

David (12:24)
Where do you see applications? What can people know from their personal life or their business life or maybe apply that to manufacturing cases? Does that make sense?

Wilhelm (12:33)
Yeah, no, perfect. And it still fits exactly into what we were just talking about. Because computer vision is exactly that ambition of trying to recreate what comes really, really easy to us, which is to see the world using sensors, in our case our eyes, and then interpreting what we can see and making sense of things. So for a human, let's say

you have a mug in front of you, and I tell you, grab that mug and put it on the other table. For you, that's a trivially easy task, right? Because obviously you can identify what is the mug. You can differentiate the mug from the rest of the world that surrounds it. You can estimate where it is. You can easily steer your hand towards the mug and place it somewhere else.

David (13:14)
Yeah.

Mm-hmm.

Wilhelm (13:31)
because again, your brain is already capable of doing that. A machine really struggles with that, because currently in the computer vision world, you have to train it for a very long time. And especially whenever a computer comes into a setting that it hasn't seen before, where there has been no training data provided, it can easily struggle and it can easily misidentify things,

David (13:58)
Mm-hmm.

Wilhelm (14:00)
and it might reach next to the mug, or it might accidentally crush the cup because it has miscalculated exactly where it is, or what the distance is, and so on. So machine vision is the attempt of trying to get as close as possible to a full recognition or a full assessment

of the world, including all of these parameters, like 3D, six-degrees-of-freedom tracking of objects, motion capture, and very, very reliable object recognition. And that can then be translated into different areas of application.

David (14:46)
Mm-hmm.

And that's, I would say, what we see in our personal life: these vision algorithms on our phones, or you now obviously also see them as very important parts of self-driving cars, for example. And I'm always wondering…

Wilhelm (15:07)
Mm-hmm.

David (15:15)
How far are we in these developments? How do these algorithms work? How do they recognize another car? Or how do they know what are the streets or the traffic lights?

Wilhelm (15:34)
Yeah, yeah, again, we're still in the same realm. So it's all about having a clever neural network that can be fed with sufficient data. And then it recognizes statistical patterns, patterns that might translate into repeatable and recognizable concepts. And these concepts can be

applied to create world models, essentially. And the more universal and generalizable you become in your ability to model the world, the better you can deal with the general world. And that's exactly the reason why self-driving cars are still so far from being widely applied, because literally, you have to learn the entire world, because every road looks different.

Again, if you limit it to a few roads, then we already have some very good examples, right? Or cities: you can collect enough data from single cities that you can already have pretty decent self-driving cars. But for general self-driving cars, we're still quite far away. And that's also why the postman is probably one of the very last jobs that will ever be automated,

David (16:39)
Yep.

Mm-hmm.

Yeah.

Wilhelm (17:01)
because every single mailbox looks different. There might be a staircase that leads to it. The lid looks different. The door ring, the bell, everything is always completely different. So accounting for that would be extremely difficult if you tried to do it with data.

David (17:09)
Yeah.

That’s gonna be

a very strange day, the first day a robot rings my doorbell. But if I understand it correctly, we're still very far away from that, right?

Wilhelm (17:22)
We'll have to wait a while.

So yes and no. We are very close when you have very, very specific areas of application. So if you have a vision system that is only meant to do something very, very specific, we’re already very close. And we have extremely capable systems that are way beyond the capabilities of what a human can do. So again, we get back to this generalized versus very specialized skill.

So what we thought decades ago was one of the last things that would be automated would be something very difficult, where you need decades of expertise, like studying X-ray images. We thought that's going to take a very long time, because it takes so much skill, so much labor and expertise. But it turns out it's so narrow, it's so much

denoised; there is no world modeling that is required. You can very, very closely just look at it, and it's even just two colors, right? You don't even have to differentiate between colors. So this is perfect. It's beautiful for machine vision applications, or for AI, because it's so simple. And in such cases, where you can sufficiently

simplify the world so that the AI doesn't have to remodel the entire world to make sense of what it is seeing, it can create these elaborate patterns that it recognizes, and the concepts about it, and from there establish what you might call a semantic understanding, where it understands the deeper connections. Wherever you have

these cases where an AI can do that, we are already at the level where we are surely beyond human capabilities.

David (19:29)
And if we apply this specifically to the industrial ecosystem, to manufacturing, can you just highlight a couple of typical cases where computer vision makes a difference today?

Wilhelm (19:47)
Yeah, there are several. I think one that is rolled out very, very significantly is in workforce safety, for example. And that already has a tremendous impact, because many machines now have these vision systems connected to them, and it's directly coupled to safety switches. So whenever a worker is still standing in an area that is not safe during operation,

the machine will refuse to work, essentially. Similarly, there will be alerts if, let's say on a construction site, which is a very cool tech, it recognizes that someone is not wearing their helmet. Then essentially an alarm goes off and the worker is notified to please put on the helmet. So this would be a popular area of application. Then barcode reading,

serial number reading, these kinds of items are probably the most common ones. And for a very long time, very simple quality checks have already been implemented. And most recently, we're getting more and more applications of quality control as well, growing more and more complex. So veering more and more into these areas where you need to have a deeper

understanding of what's going on, with a lot more noise. So you need to be closer to a general solver. And that is something that is happening because the capabilities of the AI systems are increasing.

David (21:33)
Because you now mentioned quality control, I think it's also an interesting topic to take a bit more time to define what quality control means. And I think there are also two dimensions here: quality control has an impact on the operations itself, but that's in many cases also because of the human who is

consuming it, either eating what is being produced, or holding it, or whatever. So could we just go down that rabbit hole for a couple of minutes?

Wilhelm (22:17)
Absolutely, there are different dimensions to it. On the one hand, quality control, in the simplest understanding, controls the quality of the product. The purpose is to make sure that whatever comes out in the end, or is passed on to the end user, is of high quality. That's the simplest application, and that indeed is extremely important for the end user, because it means that you get a high-quality product.

If it's a food, it's something that is nutritious and doesn't poison you. If you're buying some product, you want it to last for a very long time. For the industrial application, quality control is also important from the quality assurance point of view, which means you want to ensure that the quality is at an acceptable level throughout the production line as well,

because that has consequences for the production line health, has consequences for your efficiency, for your go-to-market, for your yield, and then also in terms of your environmental footprint. Because if you can ensure high quality of what you are producing at every production step, then you can make sure you are producing faster, you are producing more.

And you’re also wasting a lot less resources and less energy in the production of these. Because if you do not have proper quality control throughout a production line, you can easily continue to put value, resources, and energy into products that already carry a critical defect. And then if you haven’t done any checking and you’ve finished the product and you only do end of line quality control,

David (24:06)
Mm-hmm.

Wilhelm (24:14)
and you have to discard it or you have to rework it, there's a lot of waste that happens.

David (24:22)
There is also this idea some people have that every product should be, I'd say, as good as possible, right? Whereas, I have a mathematical background, so I will say no: quality control is about producing up to a certain standard, a certain spec, that might not be perfect. It might be, yeah, whatever the customer considers

good enough, or whatever the spec considers good enough. Have you seen applications where, for example, manufacturers were producing a product which was too good or not good enough? Do you have some experiences here to share?

Wilhelm (25:13)
Yeah, absolutely. I mean, perfection is always impossible. There's no such thing as perfection. Even if you had something that is extremely simple, let's say you're producing a perfectly square cube of aluminum, which is a beautiful material because it can have a perfectly uniform surface, and the dimensions can be very clear. Even there,

if you're producing 1,000 of these, by the time you've produced the thousandth one, the inserts that you use on your CNC mills will have worn out by a tiny fraction already. So you're probably 0.0001 millimeters out of the measurements that you wanted it to be. So perfection is never reachable. And it's also virtually never required,

because we know that there are always flaws in everything. But different industries have indeed a lot of different tolerances. In some industries, it's fine if you have a millimeter of tolerance; in some of them, 0.01 millimeters is already unacceptable. One of our customers, which is perhaps a great example, is creating

very complex glass composite panels that go into aircraft. So we are in a space, aerospace, which is extremely, extremely cognizant of quality, both visually and aesthetically, but then also in terms of safety. And it's very granular in what is allowed and what is not allowed. So in the quality control of these

glass composite panels, we’re looking at more than 40 or 50 different quality parameters that define what kind of defects could be present, what the classifications of these defects are in terms of size and category. So is it a bubble? Is it a scratch? Is it an inclusion? Is it an FOD? Is it a crack or glue residue?

And their customers have different requirements for these. The most demanding of their customers are giving them even stricter rules. Say, per panel, you are allowed to have one bubble and two scratches, but the scratches cannot be bigger than 0.2 millimeters, and they cannot be closer to each other than 5 centimeters, something like this.

Very complex rules, very complex tolerances and allowances. And I think this is quite uniform across many industries, that you have very, very complex rules like that. And again, if you want to automate this, a simple anomaly detector doesn't cut it, right? It's not good enough. You need something that can understand these parameters, take them into account, and

really mimic what a human does essentially.
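The kind of acceptance rules described above can be made concrete in a few lines of code. This is only an illustrative sketch: the thresholds (one bubble, at most two scratches under 0.2 mm, at least 5 cm apart) are the hypothetical example from the conversation, not a real customer spec, and the `Defect` type is invented for this example.

```python
from dataclasses import dataclass
from itertools import combinations
from math import hypot

@dataclass
class Defect:
    kind: str        # e.g. "bubble", "scratch", "inclusion"
    size_mm: float   # largest dimension of the defect
    x_mm: float      # position of the defect on the panel
    y_mm: float

def panel_passes(defects: list) -> bool:
    """Check one panel against the example tolerance rules:
    at most 1 bubble, at most 2 scratches, each scratch <= 0.2 mm,
    and any two scratches at least 50 mm (5 cm) apart."""
    bubbles = [d for d in defects if d.kind == "bubble"]
    scratches = [d for d in defects if d.kind == "scratch"]
    if len(bubbles) > 1 or len(scratches) > 2:
        return False
    if any(s.size_mm > 0.2 for s in scratches):
        return False
    for a, b in combinations(scratches, 2):
        if hypot(a.x_mm - b.x_mm, a.y_mm - b.y_mm) < 50.0:
            return False
    return True

# One small bubble and one small scratch: acceptable.
ok = panel_passes([Defect("bubble", 0.5, 10, 10), Defect("scratch", 0.1, 200, 200)])
# Two small scratches, but only 10 mm apart: rejected.
bad = panel_passes([Defect("scratch", 0.1, 0, 0), Defect("scratch", 0.1, 10, 0)])
print(ok, bad)  # True False
```

The hard part in practice, as the conversation notes, is not these rule checks but reliably producing the classified, measured defect list that feeds them.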

David (28:42)
And I can assure you, mathematics has already been playing a role for many, many years in this domain. And I would say now the models become smarter and there is more computational power and so on. But one of the challenges I see is that it is sometimes very easy to do a small-scale, whatever you want to call it, proof of concept, pilot,

MVP. Let's take the panel as an example. I can imagine that you have these requirements from, I don't know, one customer, and you analyze the image, and you do some classification, and after a while you might get results. But one of the very specific challenges I see is that the more complex the mathematical models become,

the more complex it also becomes to go from a proof of concept to a truly operational system which is capable of handling all kinds of different requirements, or different panels, or different compositions, or whatever. So this scaling thing is really a challenge, in my opinion. Have you seen that as well? Are there some, I would say, tips and tricks, or do's and don'ts?

Wilhelm (30:04)
Yes, absolutely. And in fact, we've seen it so much that it has become part of my standard introduction of what we do. Because I always ask any manufacturer, anyone who approaches us, or someone I meet at a trade show, have you tried computer vision yet? And by now, most people say, yes, yes, we've tried. And then you ask them, and how did that go for you?

And in their reaction, you can see the reality. Because in the reaction, you see the wry smiles and the wrinkled foreheads. Oh boy, why did you have to ask that? Exactly. Because they were so proud already. And rightly so. I mean, everyone who sees the potential and is keen to not

David (30:37)
Yes.

Yeah, they go like, yeah.

Wilhelm (31:00)
fall behind should be trying these things. But indeed, everyone tries, and it turns out to be really difficult. So we've labeled this problem the last mile problem: the last mile problem of actually achieving very, very high levels of accuracy and performance in these systems. And that's very difficult. So we plot that as an exponential curve, essentially. So you get started,

David (31:25)
Okay.

Wilhelm (31:27)
and you get very promising results on the basis of relatively little effort. So with just some data that a lot of people already have on hand, and a couple of hours of work, you might be reaching something that approaches an 80% level of accuracy. And you think, wow, if we got here this quickly, surely how much more difficult can it be to reach 99%?

David (31:46)
Mm-hmm.

Yeah,

yeah.

Wilhelm (31:56)
And then

you keep going, and it turns out, oh boy, how much more data do you need? How many more hours do I have to sit here labeling and labeling and curating and retraining? And that's this exponential increase in time, data, and manpower that is required in order to push it over the edge. So this is what we see over and over again. And this is one of the main difficulties of

David (32:02)
you

Wilhelm (32:24)
using especially classic approaches to computer vision with real-world data; that can really make it difficult.
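The "last mile" curve just described can be illustrated with a toy calculation. The cost model below is purely an assumption for illustration (effort growing like 1/(1 − accuracy), so each halving of the remaining error doubles the cost), not a fitted result from real projects:

```python
# Toy model of the "last mile" problem: assume the labeling/training
# effort needed to reach a given accuracy grows like k / (1 - accuracy).
def effort_hours(accuracy: float, k: float = 1.0) -> float:
    """Hypothetical effort to reach `accuracy` (0 < accuracy < 1)."""
    return k / (1.0 - accuracy)

baseline = effort_hours(0.80)  # the quick early win
target = effort_hours(0.99)    # the "last mile"
ratio = target / baseline
print(f"Reaching 99% costs {ratio:.0f}x the effort of reaching 80%")  # 20x
```

Under this assumption, the jump from 80% to 99% accuracy costs twenty times the effort that got you to 80% in the first place, which matches the "how much harder can it be?" surprise described above.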

David (32:36)
And I think you can probably extrapolate that also to other applications of modeling techniques. And this exponential thing is really interesting, because the 80% is also typically where you have the clear cases, where you go like, yeah, that's whatever the result needs to be, that's clearly right, I would say correctly classified or whatever.

Wilhelm (32:56)
Mm-hmm.

Mm-hmm.

David (33:06)
But it's in the 20% that you get the acceptance by, for example, the QA staff or the operators, because they also need to somehow accept the fact that there is this, in their eyes, black box doing stuff and giving back results, or maybe sometimes dictating what they should do. I've seen…

Wilhelm (33:25)
Mm-hmm.

David (33:36)
That's a bit more my own background, but I've seen very similar things when you, for example, start working with model predictive control. In my background, the chemical industry, but there are more examples as well. Where at a certain point in time, you see like, okay, now I've modeled this plant or this line or whatever. And at a certain point in time, the operators start getting

new set points, either automatically or they still need to accept them. And in many cases, if you're not really clear about why you calculated the set points the way you calculated them, that can actually create some resistance, where operators go like, yeah, no. No, that can't be true. That's impossible. But I really think that, in my opinion, that's in that last phase.

Wilhelm (34:22)
Mmm.

Mm-hmm.

David (34:35)
The question is, how do you overcome that? What do you do to make that exponential increase?

go better, faster, become more acceptable. Any insights there?

Wilhelm (34:48)
Hmm.

Well, yes, because it's part of what we do in our business, solving this last mile problem. But there are different ways of doing it. I'll outline a few before I go into how we do it. Most importantly, I think what the majority of providers are doing is trying to

facilitate exactly what you said: the kind of tying-in of the expertise of staff in order to make the collection, the labeling, the training, and the refining easier. And to some degree, we are doing that as well. But the new kid on the block, which we are applying, and we're not the only ones doing this,

there are some things we are doing differently from everyone else, but it's not like we came up with this, but the new kid on the block is synthetic data. Where the classic approaches require you to just collect lots and lots of data, curate it and upload it and then train these models, synthetic data allows you to work off a very small amount of data

and extrapolate from there. So the way I usually explain this is that a synthetic-data-based approach really mimics what a human does, essentially, what a human brain does. So let's, again, stay with the mug, right? Let's say you are a manufacturer of mugs and you want to train a new quality control worker, a human. What would you do?

You would take the person to your quality control lab, and you would show him or her one or two mugs. You would leaf through the defect catalog, which lists a couple of defects with one or two example pictures. And then you spend some time showing a couple of examples. Like, look here, the spacing is not quite right, and it shouldn't be touching. And sometimes the overlap is a bit more, sometimes it's a bit less.

David (36:56)
Mm-hmm.

Wilhelm (37:11)
And you keep going like this for maybe an hour or so. And at some point the human says, okay, I get it. Yeah, I got it. Right? And from there on, that person is able to do the inspection for almost all cases. Right? And what has happened inside the brain of the human worker is imagination, is extrapolation, or, if you will, synthetic data. Because the human imagines, right, based on that

David (37:18)
Yeah.

Mm-hmm.

Wilhelm (37:40)
semantic understanding, and can extrapolate. And if you show the human a mug that has a different design, that has triangles instead of dots, the human will still be able to do that. And a synthetic-data-based approach can mimic that and can still do that, while a classic system, if you have only shown it the dots before and you now show it triangles, will freak out and label everything as a

David (37:41)
Yeah?

Mm-hmm.

Wilhelm (38:09)
potential defect.

David (38:11)
But still, I'm trying to get a grasp of how that synthetic data looks. So is it like you try to generate the world somehow? Well, the world in this case being mugs, but you try to generate possibilities? Okay.

Wilhelm (38:30)
Think of, again, the human as the perfect example. A lot of us had dreams of flying when we were children, right? In a sense, what we were doing there is data augmentation, or synthetic data generation, in order to model the world from a different perspective, which allows us to better understand the complexity of it and the causal connections. And having this modeled,

literal bird's-eye perspective allows you to do that better. So this is one of the more interesting interpretations of why this is a shared phenomenon between so many people, that in our adolescence we have a lot of flying dreams, and they become fewer when we grow older. But essentially, whenever you dream, that's still what you're doing: you are modeling things. So this is a fascinating spin also on

maybe some of the more disturbing dreams that we might have, because it's literally your brain modeling and simulating potentialities so you can understand the situation better. And in exactly the same way, the synthetic data approach is literally creating variations. So in the simplest sense, which is closer to the data augmentation that we already know, right? It is taking,

let's say, if you feed it an image of the mug that captures just this, data augmentation just turns it, twists it, allows the computer to understand it better. So that would be data augmentation. And if you take it one step further, you give it one mug, and you can use different tools to, let's say, get started. You can either do procedural generation that is fully hard-coded.

David (40:08)
Yeah, yeah.

Mm-hmm.

Wilhelm (40:25)
You could have rendering: you can use tools like Blender, or engines like Unreal or Unity. Or you can, and again, this is the latest, use generative AI. Yeah. Yes. That's right.
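The simplest of these ideas, classic data augmentation, can be sketched in a few lines. This is an illustrative example only, with a small random NumPy array standing in for an inspection image of a mug; real synthetic-data pipelines go much further (3D rendering, generative models), but the principle is the same: many plausible views from one example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a grayscale inspection image (8x8 for illustration).
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

def augment(img):
    """Generate simple augmented variants of one image:
    the four 90-degree rotations plus horizontal and vertical mirrors."""
    variants = [np.rot90(img, k) for k in range(4)]   # 0/90/180/270 degrees
    variants += [np.fliplr(img), np.flipud(img)]      # mirror images
    return variants

augmented = augment(image)
print(len(augmented))  # 6 variants from a single source image
```

Each variant is a plausible alternative capture of the same part, so a model trained on them becomes less sensitive to how the part happens to be oriented under the camera.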

David (40:32)
Mm-hmm.

Now we are back to the gaming world from the beginning of the podcast.

You know, I've always tried to be very optimistic, the future is bright, et cetera, et cetera. But if we're talking about using generative AI, generating data, et cetera, then there is also this flip side of the coin. And that flip side is that AI is using gigantic amounts of resources and

energy. So lately, a lot of people are also asking whether the energy cost is worth the results. And then we typically go into the domain of environmentally responsible AI, or green AI. So

Wilhelm (41:12)
Hmm.

David (41:39)
That’s maybe also a route I want to explore with you.

Wilhelm (41:42)
Yeah, certainly. Because, as I mentioned, one aspect of quality control for manufacturers is also the green consequences, as in: how can I use AI to optimize my production so that I waste fewer resources and less energy? And in the same way, you can

try to assess what other ways we can use AI for green purposes. And then on the flip side, how can we make AI itself greener? So this is very much fitting with the theme of a challenge that I recently had the honor of hosting for Tomorrow University as part of the Impact MBA, which is called the Green AI Challenge. And it’s looking at exactly that.

that question, how can we use AI for green purposes and how can we make AI greener itself?

David (42:42)
Okay.

And is this, I would say, is this a new topic, or are there already themes identified where, I don’t know, we should be working around this or that?

Wilhelm (43:01)
Yeah, I mean, there are again lots of different organizations working on this. There’s a big report by the United Nations looking at how AI can be used to achieve our sustainability goals. There are many startups that are explicitly trying to use AI for conservation purposes, for…

carbon reduction purposes. There’s a lot of industrial AI trying to help manufacturers reduce their carbon emissions. And equally, there are a lot of companies trying to help those applying AI to make their AI greener. This can start on a hardware level, where computing resources can be produced sustainably

and then also run sustainably wherever possible, using solar power or other sustainable sources of energy. And then, one level deeper, on the software architecture side, there are several tools you can use to simply cut down on the compute resources required to both train and run

David (43:59)
Yep.

Wilhelm (44:24)
certain models. In many cases, earlier generations of AI may have required exorbitant amounts of resources to train and run. And you can really trim that down now to run on edge devices that consume a fraction of what the earlier models were using.
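One common way models get trimmed down for edge devices, in the spirit of what is described here, is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting memory roughly 4x. The sketch below is purely illustrative; the function names and numbers are invented and not tied to any specific framework.

```python
# Toy linear quantization of float weights to signed 8-bit codes.

def quantize(weights, bits=8):
    """Map floats onto integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

weights = [0.82, -1.27, 0.05, 0.64]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)

# int8 storage is 4x smaller than float32, at the cost of a small
# rounding error in each weight.
error = max(abs(a - b) for a, b in zip(weights, restored))
print(codes, error)
```

Production toolchains (e.g. the quantization support in major deep-learning frameworks) add refinements such as per-channel scales and calibration data, but the core trade of precision for footprint is the same.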

David (44:45)
Sometimes you get the feeling it’s: I have enough money, I don’t care, just run the model.

Wilhelm (44:55)
Mm.

Yeah, and especially right now, because there’s so much of an arms race going on where it looks like the objective is to reach the goal as quickly as possible, not as sustainably as possible. I don’t think that’s going to be scalable for the foreseeable future, because we’re already hitting some hard limits in terms of, for example, availability of GPUs.

David (45:02)
Yeah.

Hmm, yeah.

Yeah.

Wilhelm (45:27)
I mean, there’s a good reason why Nvidia has gone absolutely through the roof. So congratulations to everyone who bought the stock early. But right now, if you try to buy a big cluster of GPUs because you want to do what some of the big players are doing, good luck finding one. It’s very difficult. I don’t think we can do that forever, and we will very soon have to look very deeply into how we can streamline and how we can make things more efficient.

David (45:59)
That brings me to the last topic I’d like to discuss here. That’s ethics. You have a PhD in ethics. What is the link between ethics and mathematical models?

Wilhelm (46:23)
There are several links, and again, it depends on what area of application we’re looking at. Since we’ve talked mostly about industrial aspects, let me start with that, perhaps, because it depends also on where you apply it. But some of the more famous questions being asked concern

job replacement and displacement. In many cases, we might indeed have models, or model-driven robots, that can replace quite a lot of jobs. But on the flip side, that can also be seen as freeing up very skilled labor that would probably be happier, or could be applied in other areas

David (47:07)
Mm-hmm.

Wilhelm (47:21)
more productively. So it’s not as simple as it is often portrayed. And it also depends, if there truly is job replacement, on how that is managed. Another area, again depending on the industry, is how much personal data or how much critical data

is involved. We are in a comfortable position in that we don’t really have to care about personal data in our line of work, quality control, because we’re not talking about humans; we’re just looking at products. But at the same time, if you look at it from a purely business perspective, we are collecting tremendously critical and important data

David (48:08)
Yep.

Wilhelm (48:19)
for the business, because it literally carries a digital twin of what they’re producing, right? And that can be quite important, so they certainly don’t want any of that leaked. There is an ethical dimension to that as well. But yeah, there’s quite a lot there in the industrial space, and the responsibility, especially when you are working with big data that involves humans and human lives,

David (48:20)
Mm-hmm.

Yeah.

Wilhelm (48:47)
That shouldn’t be understated.

David (48:52)
And is this

image of, let’s go back to The Matrix or the Terminator, like, is the Terminator coming?

Wilhelm (49:00)
Mm-hmm.

Almost certainly not. Almost certainly not. Because the Terminator assumes an almost human-like sense of right and wrong, creating something that’s almost purposely evil. It creates too much of an anthropomorphic vision of what an AI

David (49:12)
Mm-hmm.

Wilhelm (49:38)
that might be a problem for us might look like. I think the writings of people like Nick Bostrom are closer to where we might end up, or where the real potential for difficulty lies, because you get super-competence that just doesn’t really require alignment with human ideas of what’s right and wrong.

And the example that is usually brought forward is that when we build a building and there was an anthill there, we’re not removing the anthill out of malicious intent. We simply disregard the interests of the anthill, because our interests, or the complexity of our considerations, seem to us so much more important compared to

whatever the interests of that ant colony might be, right? And similarly in that scenario: if you have a superintelligence that vastly outperforms, or has left behind, any remnant of human-level intelligence, it might reach a level of sophistication where it similarly disregards the interests of us

the way we disregard the interests of an anthill. So it’s clearly not a Terminator, right? It’s not a Terminator in that sense. But from the perspective of the anthill, the consequences are still undesirable.

David (51:23)
But I will remember that: almost certainly, it’s not coming.

Wilhelm (51:30)
Well, there’s a non-zero chance that we might see something like that. But what I think is absolutely certain to happen is a change in everyone’s life. All of our personal lives are already majorly influenced by artificial intelligence. Artificial intelligence is already pervading everything that we use. All of us have

these incredible little machines in our pockets, which have collectively signed us up for the greatest psychological experiment ever conducted on humanity. And we’re all sort of willingly participating without having any idea what the long-term consequences truly will be. Because every single app that you’re using by now has been touched by AI; every interaction that you have on social media, anything you watch, anything you consume

has had some AI input. So we’re already influenced massively, but that’s just going to increase so much more in the future. But it doesn’t have to be negative either, right? On the AI-optimist side, there are a lot of very good arguments being made as well that…

You know, we might end up with something like what the early cyberneticists were talking about: machines of loving grace, right? It’s almost a paternalist perspective, where very impartial, very competent systems solve some of the most complex resource-allocation problems for us, allowing us to just…

live lives of self-reflection and self-development, rather than having to worry so much about having roofs over our heads all the time. Who knows? The fun question, and this is a question I put in my thesis all those years back when I wrote my PhD, is: where do we end up? Are we ending up with Star Trek?


Wilhelm (53:54)
Or are we ending up with Blade Runner? What’s more likely? And you can have different opinions, whether you’re an optimist or a pessimist, but it’s certainly going to be a very fascinating future that we’re walking into.

David (54:11)
This is the perfect ending to this super interesting episode. Wilhelm, thank you so much for joining this podcast. I’m sure people will indeed, as you said, see many of the things we talked about in their daily lives, and these things will also start to touch them in their professional lives. So yeah, thank you very

very much for joining me. And to our audience, I would say: thank you for listening, make sure to subscribe, and until we meet again. Thank you. Bye bye.

Wilhelm (54:53)
Thank you very much, David. It was a pleasure.

Bye for now.