David (00:00)
Welcome, you’re listening to the IT/OT Insider podcast. I’m your host David. Subscribe to our blog and our podcast to learn more about our work, or go to ITOT.Academy to discover our trainings. Today it’s all about industrial cybersecurity, and I’m joined by Danielle Jablanski. So Danielle, welcome.

Danielle Jablanski (00:21)
Thank you, thanks for having me David.

David (00:23)
So cybersecurity hasn’t been on our podcast yet, although it’s super important. But I have to say, the very first article we published on our blog was actually about the Purdue model. So it is something which is really important, but we haven’t touched it on the podcast. So I’m super happy to have you here on the show. Why don’t we start, Danielle, with a short introduction. Who are you? What are you doing? What’s your background, et cetera?

Danielle Jablanski (00:50)
Sure.

So, Danielle Jablanski. I go by DJ a lot of the time, sometimes Jabs in the community. I’m open to new nicknames if anyone thinks of one. Most recently, I spent the last year in the office of the technical director at CISA, the Cybersecurity and Infrastructure Security Agency. We joke that security is so important, it’s in the title twice. I focused on the OT strategy for products and services from headquarters and how that trickled down through the field offices in the states. There are 10 different regions for CISA in the US.

And what that really focused on was extending some of the ways we think about security and risk to OT without disrupting these operations, where 24/7 availability is really prominent but safety is the critical aspect. So a lot of that is less technical and actually more aptitude. And I say aptitude instead of awareness, because that’s really where I find myself in my career: building aptitude. What is the capacity that you need to build in your organization to actually utilize capabilities? That’s where I like to focus.

Before that, I was at Nozomi Networks for a couple of years, one of the continuous monitoring solutions in the OT space, working on their strategy, different market emphases, and some of the weeds on the product a little bit as well. And before that, I was a deep diver into intrusion detection systems for industrial control systems. So I’ve witnessed that whole evolution of the product side. And now I’m in a consulting role at STV. It’s a major engineering firm in the United States that predominantly builds rail infrastructure as well as water utilities

and buildings like airports, prisons, you name it. And we have an EPC role in the creation of that. So it’s been a really fun journey, and I love what I do.

David (02:26)
And of course, the companies you work with, that’s real critical infrastructure, so cybersecurity is paramount.

Danielle Jablanski (02:35)
Yes, yeah. And we’re realizing that more and more: hey, we have these integrators and these design firms that are telling you what to buy, what to procure, and how to implement it. And they’re actually doing parts of the installation as well, according to different regulations. And there are all these sectors that don’t have critical cybersecurity requirements, but should, and know that they should, and want to adopt better practices. But there’s no good menu or roadmap out there for them. And so…

David (03:00)
Yeah.

Danielle Jablanski (03:00)
the investment for those engineering firms to say, let’s do this upfront instead of bolting on security five, 10, 15, 20 years later, I think we’re just starting to turn the corner on that culture change.

David (03:11)
Yeah, absolutely. When I look back, I stepped into the cybersecurity domain myself 15-ish years ago. At that point in time: no frameworks, brand new, super cool. But I also remember that it was still really, really easy to just look for SCADA systems connected to the internet. I know at some point, by accident, I

was entering a SCADA system of a dam somewhere in the world. I forgot where it was, but it’s kind of scary. Could we, before we do a deep dive into, I would say, the more nitty-gritty details, could we start, or could you start, by defining cybersecurity, and then primarily ICS cybersecurity, a bit more? I would say as a…

Danielle Jablanski (03:46)
terrifying. Mm-hmm. Yeah.

David (04:09)
I think you also teach some courses, so treat it as an ICS CyberSec 101 for our audience.

Danielle Jablanski (04:13)
Yeah.

Sure.

So it used to be that ICS, the term industrial control systems, really encapsulated the industrial space. That was the old umbrella term, I would argue. Now that umbrella term has extended to operational technology, which represents a broad set of technologies covering process automation, instrumentation and field devices, cyber-physical operations, and industrial control systems. All of the above. Of course, these systems are super prevalent across lifeline sectors, as we know them: energy, water, transportation,

but also process heavy manufacturing from automotive to food production and pharmaceuticals. And then there’s this kind of new brand of OT, which I call hyper-connected facilities with a ton of IoT. It could be drones, it could be tablets, it could be the baggage claim at the airport. And so that also extends to modern factories as well, where you have tons of sensors and robotics. So again, a huge kind of umbrella term, but it’s easy to break it down if you want to focus on any of those sectors or areas and what types of systems are really relevant there.

Primarily for industrial control systems, OT includes legacy controllers and field devices, which are those instrumentation components, the valves, the pressure sensors, all of those things that are out there controlling the real world. And those often have proprietary and insecure protocols. That’s kind of the crux of the cybersecurity issue we talk about, but it doesn’t represent the whole landscape. There are actually some modern systems within the industrial control system space from the manufacturers that have more security. They’re just less distributed, because the life cycle of these

legacy components is long. And so we hear things like programmable logic controller, distributed control system, human machine interface, and then SCADA, which you’ve mentioned: supervisory control and data acquisition systems. And people get really hung up. They’re like, I’ve heard somebody say an HMI is a SCADA, and no, sometimes an HMI is only hardware, and sometimes an HMI is a piece of software.

David (05:41)
Yeah.

Danielle Jablanski (05:59)
If you just break down the words, they’re exactly what they mean. So HMI is a human machine interface. It can be an entire SCADA in a control room, it can be a software, and it can be a tablet, right? When you go to an automated car wash, that screen is an HMI. It’s a virtualization sometimes, but other times you can walk around with one. And if that component is on a piece of hardware and that hardware only does that process, then that’s an HMI, right? So we get really hung up on words

and things like that. It’s fun to walk that back a little bit sometimes. Like I said, there are modern versions of this, but the proprietary protocols are really where we focus. Those are fieldbus protocols that have turned into Ethernet communications, where we replaced the RS serial cables that used to be one-to-one communications. And so that facilitation of communication from wired devices and controllers to this ubiquitous

explosion of sensors and real-time data for the actual processes is how we got more network connectivity, more remote operations, and more distributed risk, as we talk about it in security terms. So we can dive into that if you want, or we can talk about anything else.
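Danielle’s point about insecure-by-default fieldbus-over-Ethernet protocols can be made concrete. Modbus/TCP, one of the few such protocols with a public specification, carries no authentication, session, or encryption fields at all: any host that can reach TCP port 502 can issue reads and writes. A minimal sketch, with the register and value invented for illustration:

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 register: int, value: int) -> bytes:
    """Build a raw Modbus/TCP 'Write Single Register' (function 0x06) frame.

    Layout: MBAP header (transaction id, protocol id = 0, length, unit id)
    followed by the PDU (function code, register address, value).
    Note what is absent: no password, no session token, no signature.
    """
    pdu = struct.pack(">BHH", 0x06, register, value)
    # The MBAP length field counts the unit id plus the PDU bytes.
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Hypothetical write: set holding register 100 to 42 on unit 1.
frame = modbus_write_single_register(transaction_id=1, unit_id=1,
                                     register=100, value=42)
print(len(frame), frame.hex())  # 12 bytes, and no authentication field anywhere
```

Segmentation matters precisely because nothing in this frame identifies or authorizes the sender; the only defense is controlling who can reach the port.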

David (07:13)
That’s a lot to digest, but an interesting first take, and there are so many questions I want to ask you. But let’s start with the basics. I already mentioned Purdue, or some kind of segmentation model, in the intro. For me personally, segmentation is still the core, the fundamental, of securing ICS systems. I don’t know whether that’s your take as well or not. Could you also maybe explain it to our audience?

Danielle Jablanski (07:52)
Sure. For the last five or six years, I’ve always said segmentation is king. I still think it’s paramount. It’s the number one security quote-unquote control, but it’s really not a specific control. People talk about segmentation like a light switch that you can just turn on. It actually takes a lot of planning, a lot of implementation, a lot of meetings, a lot of investment, and a lot of personnel discussions, right? Getting everyone on the same page. And so

there’s been this vague assumption that, quote-unquote, you can’t protect what you can’t see. And that’s become the visibility push for a lot of the marketing to get into these OT networks and do some security. And that’s fine. But you also can’t do any type of root cause analysis if you’re not incorporating your entire operation into your purview, regardless of the program, right? Whether that’s risk management, your insurance provisions, your HR planning. It all has to be encompassed. And so

David (08:39)
Yeah.

Danielle Jablanski (08:43)
when we look at OT, it’s the same thing. So I see a lot of folks struggle with segmentation, where they’ll actually spend a lot of money on a visibility tool, but they won’t have updated their firewall rules in five or ten years. So they have some traffic that they’re allowing in this network where they think, well, we have monitoring and alarms, so we’ll catch these various things. And so what I bring people back to is both the ability to do root cause analysis and what that looks like for your organization, and how you want to design your network segmentation around that goal,

as well as, you know, what are your resources? A lot of people say, well, I want to do this, this, this, and this. And it’s like, well, if you have one person whose part-time job it is to run OT, or even if you have a really dynamic cybersecurity team, but that team is very centralized and your operations are very distributed, you’re also probably not doing OT to the best of your ability. And I’ve seen that with really, really well-funded organizations. Sometimes we put small to medium-sized businesses in a corner, but I’ve seen major companies globally that have no asset management across their OT infrastructure,

and they just rely on those distributed operations to do their own preparedness, whatever that looks like. And it could be good, but they don’t have any central vision into that. So on the network security front, we’ve talked about the Purdue model a little bit. There are different ways to slice and dice your architecture and build what in the past they called layered defenses, a defense-in-depth model, or zero trust, depending on what strategy you want to articulate. And then you just go and execute that strategy. And again, the execution is based on your resources every time. And the last thing I’ll say is,

David (09:47)
Yeah.

Danielle Jablanski (10:10)
The way I tell folks to do this is an effects-based approach rather than a means-based approach. And I borrow that from military strategy. It’s the only cybersecurity portion I’ll ever borrow from the military, because proportionality doesn’t map. Cyberwar is a joke. Asymmetry is a massive contributor to, quote-unquote, cyberwar. And we used to talk about the nuclear cyber bomb. None of those things are relevant. Those aren’t going to work out.

But this effects-based approach focuses on the effect of something rather than the means. You know, all of the CTI out there on every single capability of every threat actor is not actually going to determine for you the worst-case scenario, given your systems, your configurations, your level of access, your vendors, right? Your third-party risk management. All the CTI in the world is not going to help you figure that out. So network segmentation is a great starting point, because it tells you what you’re missing. It tells you what capacity you have, and it tells you, essentially,

before you do any type of attack pattern analysis, how robust this network and our segmentation and our access policies really are, if you start to dig into it. There’s not really good guidance out there on it. Actually, when I left CISA, I wrote a network segmentation draft. It should be published next year. I’m sure there’ll be some changes to it. It’ll be co-sealed by a couple of different countries, but I’ll give you a preview: it leads into defensible architectures. And the thing at CISA we said is, they really pushed resilience from a cybersecurity perspective.

The OT version of resilience is operating under compromise. So I wrote this guide with that kind of theme in mind. So what does it take to build defensible architectures and what does that mean? Why do we still think of security as a cost center and how do we need to get beyond that for OT? What is the risk landscape? What are the threats to OT assets? And then, of course, include the Purdue model, a couple of other relevant risk management profiles.

And then I talk about compensating controls; network segmentation is a selection of different compensating controls to make a more robust network, like we said. So hopefully this will be out soon. I also hope that it lowers the barrier to entry for people doing their cyber programs and planning. I’ve seen this leap, and you and I talked about this before, this leap to: well, I want a pen test, I want an incident response, I want this, this, this. And I asked one of my colleagues this week a similar question, which is,

is this the cart before the horse? Are people even ready for a pen test that might be 100, 150 pages long, that tells you every single thing you might want to pay attention to for the next 10 years of your life? I mean, it could take a really long time to address those issues, where there are other ways to bite off parts that you can actually chew and swallow, for lack of a better analogy. So that’s a long answer for you, but I obviously really care about that topic.
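The zone-and-conduit thinking behind segmentation, whether you frame it with the Purdue model or IEC 62443 zones and conduits, can be sketched as a simple allowlist check. This is a hypothetical illustration, not any product’s API; the zone names and flows are invented, and real enforcement lives in firewall rules rather than Python:

```python
# Hypothetical zone/conduit allowlist in the spirit of Purdue / IEC 62443.
# Zone names and observed flows are invented for illustration.
ALLOWED_CONDUITS = {
    ("enterprise", "dmz"),       # IT reaches the DMZ, never OT directly
    ("dmz", "supervisory"),      # historians/jump hosts brokered via the DMZ
    ("supervisory", "control"),  # SCADA/HMI down to controllers
    ("control", "field"),        # controllers to valves, sensors, drives
}

def flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """A flow is acceptable only if it is an explicitly allowed conduit."""
    return (src_zone, dst_zone) in ALLOWED_CONDUITS

# Observed flows, e.g. exported from a passive monitoring tool.
observed = [
    ("enterprise", "dmz"),
    ("enterprise", "control"),  # violation: IT reaching controllers directly
    ("supervisory", "control"),
]

violations = [flow for flow in observed if not flow_allowed(*flow)]
print(violations)  # [('enterprise', 'control')]
```

The point of the exercise mirrors Danielle’s: writing the allowlist down forces the planning and the meetings, and comparing it against observed traffic tells you what you are missing before any attack-pattern analysis.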

David (12:26)
Yeah.

Mm-hmm.


We’ll get back to that. So you mentioned OT-specific threats. Why don’t we… Most people definitely know things like Stuxnet, and maybe something like the attack on the Ukrainian energy grid in… Was it in 2015? I can’t remember. Anyways…

Danielle Jablanski (13:17)
The BlackEnergy one in Ukraine? Is that the one?

David (13:22)
BlackEnergy, yeah. But what are the threats you’re seeing today? What are the things where you go, like, this is what I would say operators should really pay attention to?

Danielle Jablanski (13:23)
Yeah.

So we kind of use risks and threats interchangeably, and that’s okay, right? That’s just vernacular. To me, the threats vary on a spectrum from opportunistic to tailored. Opportunistic threats are going to be ransomware, poor network segmentation, and lateral movement. What you talked about earlier: the ability to find SCADA systems online. Some of those systems have hard-coded passwords, or they have insecure remote access. Some of that low-hanging fruit is what I call the opportunistic side of the threat landscape.

David (13:55)
Mm-hmm.

Danielle Jablanski (14:09)
On the tailored side, you have things like the InController incident that happened a couple of years ago, I think it was 2022, where Mandiant reported extensively on the three developed tools that actually were targeting control systems, their proprietary nature, and their functionality, which is the tailored nature of that type of attack. That tailoring accounts for: what are the systems, what is their function, how are they networked, and how can I get to them? That’s the other side of the spectrum. So those are some of the threats, right? Not getting into all of the CTI landscape of

David (14:24)
Mm-hmm.

Danielle Jablanski (14:38)
you know, there’s that Predatory Sparrow group that hit Iran, I used to call it pissed-off sparrow, right? Not getting into each and every one of the TTPs from every threat actor out there, but that’s the range. On the risk landscape, which people also refer to as threats, you have things like insecure connectivity to internal and external networks, including the internet. There’s actually another program at CISA called the administrative subpoena

David (14:50)
No, no, no.

Danielle Jablanski (15:03)
program that’s reaching out to ISPs to find control systems and then going back to the OEMs and saying, hey, we found these, here are the IP addresses, here’s the network connectivity that we can see from the ISP with our subpoena process. Can you go to your customers and tell them that maybe this isn’t the right configuration for them? Then there’s weak authentication: clear-text passwords, default passwords, shared credentials. That’s a big problem we see. And not hardening systems when they’re installed and implemented, so ports left open with exposure allowing for direct connectivity or control.

David (15:11)
Wow.

Danielle Jablanski (15:32)
Default configurations, right? Limited configuration management or change management. Misconfigurations, right? That’s ripe for different types of exploitation. We’ve already talked about flat networks. The proprietary protocols we talked about are actually insecure by default, typically. People say insecure by design. That’s not the same thing, right? Insecure by default and insecure by design are different vantage points. Nobody designed something to be insecure on purpose. It’s actually designed, quote-unquote, purpose-built,

because these systems serve as a kind of Rubik’s Cube when you configure them differently in the wild. They can be configured for something really mundane and be fine, and they can be configured for something in a process that’s really, really critical and has a safety component. And you build different architectures around that safety component, but it’s a similar system. So they are purpose-built in that way.

David (16:01)
Yeah.

Danielle Jablanski (16:23)
We have some unpatched vulnerabilities. People really spend a lot of time focusing on the vulnerabilities. But again, because of that Rubik’s Cube nature of these purpose-built systems, focusing on those vulnerabilities might actually overwhelm your resources to the extent that you’re not able to address other really concerning security priorities. It used to be the case that there were a lot of vendor-specific asset details online. That’s actually gotten a lot better. But, you know, once something’s out there, it’s out there forever, as they say. So we have to be careful with the handbooks, the hardening guidance, the threat models that vendors

David (16:36)
Hmm.

Danielle Jablanski (16:53)
do have available. But the lack of those is a glaring concern too. So if you are buying from a vendor and they don’t have a vulnerability disclosure process, you should probably say, wait a minute. We’ve actually really leaned into some procurement guidance around OT. There was a Secure by Demand document that we put out for having asset owners, and even their integrators, ask the right questions of the vendors: hey, this might not be part of the RFP, but what does your secure coding look like? What does your memory-safe language process look like? Even if it’s for

a subset of your products and you have thousands of products out there: are you trying? Are we getting better? Are we working on this? And then, you know, you have these complex, interdependent supply chains, right? So business objectives are driving things like cloud-first adoption, and we’re prioritizing operational functionality. But sometimes that gets prioritized over security, and it leaves really glaring concerns. I can go to examples of that, but I’ll turn it back to you.

David (17:52)
No, please share those examples. Those are the most interesting ones.

Danielle Jablanski (17:55)
Yeah, so there was this, I’m going to look it up so that I don’t represent it wrong. But like I mentioned, I just started at STV a month ago, this consulting firm. And it was reported this week that there’s a train protocol that’s been known to be vulnerable for like 12 years. People have known about it, but nobody decided to do anything about it. There are literally articles now saying the industry knew for over 20 years. And it’s not like some lobbying group was sitting around saying, hey, don’t pay attention to this for the last 20 years. It was just that, excuse my large dog,

David (18:21)
Yeah.

Danielle Jablanski (18:34)
you know, operational efficiency and getting trains from A to B is the name of the business. So I think it’s really difficult for leaders to go through and prioritize this, back to my comment about security as a cost center. It’s really difficult to demonstrate the ROI of something that didn’t happen. And you could look at this another way: in the last 30 years, there haven’t been tons of exploitations of this underlying vulnerability. So there are always two sides to the quote-unquote argument.

I would never argue with anyone over this. It’s an issue that should be addressed. The vulnerability involves things we just talked about: zero authentication and zero encryption. So a threat actor could potentially craft a packet in a tailored way that would interact with the software-defined radio and then send commands to what’s called the end-of-train device. So they could potentially…

send a command for the train to brake, to apply the brakes. This has all been articulated in the reporting. And again, 20 years. So would I point a finger and laugh at anyone over this? Absolutely not. Would I love to dig into this more and understand how we get better at doing preventative cybersecurity, without all the bells and whistles of the marketing and the completely unregulated vendor landscape that’s really difficult to…

David (19:33)
Mm-hmm. Yeah.

Danielle Jablanski (19:51)
dig into from an asset owner perspective? Yeah. How can we get better at that, and help validate that some of the tools out there are actually going to meet the expectations and the maturity level of these owners and operators going forward, to really address their risk instead of buying themselves out of risk, or buying down risk, right? Which is another approach, but a different one, I think, in OT.
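The "zero authentication, zero encryption" failure mode described here is generic to many legacy protocols: if the only check on a frame is an error-detecting code, it protects against radio noise, not adversaries, because anyone who knows the format can compute a valid check value. A hedged sketch, with CRC-32 as a stand-in for whatever code a given protocol actually uses and the payloads invented:

```python
import binascii

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 to a payload. CRC-32 stands in for whatever
    error-detecting code a given legacy protocol actually uses."""
    return payload + binascii.crc32(payload).to_bytes(4, "big")

def receiver_accepts(frame: bytes) -> bool:
    """A checksum-only receiver: it verifies integrity, not origin."""
    payload, crc = frame[:-4], frame[-4:]
    return binascii.crc32(payload).to_bytes(4, "big") == crc

legit = frame_with_crc(b"STATUS:OK")          # invented payloads
forged = frame_with_crc(b"CMD:APPLY_BRAKES")  # attacker computes a valid CRC too

print(receiver_accepts(legit), receiver_accepts(forged))  # True True
```

The checksum answers "was this frame corrupted?" and says nothing about "who sent this frame?"; only a keyed mechanism such as a MAC or signature can answer the second question.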

David (20:13)
One of the difficulties, and I really like this example because it’s an OT example, many people don’t think about train protocols, but it definitely is one. But the challenge I see here is: suppose you have an existing rail network in this case, or maybe you’re a water company or whatever, where do you actually start?

So what comes first in this journey of discovering: am I using protocols which have known vulnerabilities? Are my networks properly segmented? You already mentioned that maybe you shouldn’t start with a 150-page pen test. So it’s a bit like choosing an adventure, picking a path.

Danielle Jablanski (21:11)
Yeah.

David (21:11)
But which path do you pick first?

Danielle Jablanski (21:16)
Yeah, so again, that priority setting from a risk management leadership perspective is difficult. It’s also difficult because sometimes we break down cybersecurity into greenfield and brownfield deployments, right? Brownfield being legacy infrastructure, which is kind of layered with some of these modern systems. And then greenfield we talk about as if it’s all brand new. The truth is always somewhere in between, really. I always go in with a, you

policy review, essentially. Like, do you have any policies that address OT? Do you not? Typically, they’ll have an IT incident response plan or something. They’ll have something, right? Everybody has something that they can start with from a maturity perspective. And then there’s always this crawl, walk, run model. And that model looks different for everyone, typically, because it’s kind of a choose-your-own-adventure landscape out there, with the exception of the NERC CIP standards in the US and some other types of standards and methodologies that are somewhat widely adopted. But I actually think that there’s a

David (22:01)
Yeah.

Danielle Jablanski (22:08)
bit of an over-assumption of how much 62443 has permeated the US. In my individual research and outreach to asset owners over the last couple of years, I have not seen 62443 referenced as much as I think the authors and the proponents of 62443, and I count myself among them, would like to think. Back to that breakdown of small to medium businesses and the large: I’ve seen really large manufacturers that don’t touch 62443 and don’t have to. So why would they?

But that crawl, walk, run model looks different for everybody. Back to the comment I made about effects-based understanding: this crown jewel analysis kind of starts out, even with or without an asset inventory, and I like to tie that back to scenario planning. So NIST 800-82, the revision 3 that came out in the last year or two, has six different incident types for OT systems. And they’re

malleable. So you can go in and say, okay, these are the six different incident types that I can think of. If I can do a crown jewel analysis and tease out my critical systems and functions: what’s mission critical? What do those mission-critical systems depend on as an interdependency? And what is kind of the third tier, not super important for the OT or super important to the business? What could we live without? And so once you break that down and you apply that model to some of the scenario planning, you actually can go through and map: well, in this scenario, that’s kind of generic, like,

you know, maybe a malicious instruction passed to one of my operators that I can’t verify the integrity of. Could that happen? How could that happen? I can actually tie that back to which critical systems would be impacted and/or offline, and then tie that back to some of the analysis I’ve done for business impact analysis and mean time to recovery, and how much that might cost my company in the long run, and start to say, wait a minute,

you know, here are all these critical systems that could be impacted by these scenarios in these ways. And I might not know each vulnerability that could be targeted to get to those systems, but I know where to start. And so the issue that has existed for a couple of years is that people want to buy these tools and automate a lot of the analysis for their systems. And that gives you a lot of data, right? These tools are good tools, but what do you do with that data if you haven’t built the scaffolding to go and target and act on your plans? And so that’s kind of where I come in as a consultant,

David (24:12)
Yeah.


Danielle Jablanski (24:24)
and try to lead and leverage that. Do I want to do it from a greenfield perspective? Almost always yes, but there’s a lot of brownfield out there to target as well.
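The crown-jewel-plus-scenario exercise Danielle describes can start as something as small as a table. A hypothetical sketch in the spirit of the NIST SP 800-82 scenario planning she mentions; the system names, tiers, and downtime costs are all invented for illustration:

```python
# Hypothetical crown-jewel-to-scenario mapping. All names and numbers
# are invented; the structure is the point.
crown_jewels = {
    "scada_server":  {"tier": "mission_critical",   "downtime_cost_per_hr": 50_000},
    "historian":     {"tier": "dependency",         "downtime_cost_per_hr": 5_000},
    "badge_printer": {"tier": "could_live_without", "downtime_cost_per_hr": 50},
}

scenarios = {
    # e.g. "a malicious instruction whose integrity I can't verify"
    "unverified_instruction": ["scada_server"],
    "ransomware_on_windows":  ["scada_server", "historian", "badge_printer"],
}

def scenario_impact(name: str, hours_to_recover: float) -> float:
    """Rough business impact: downtime cost summed over affected systems."""
    return sum(crown_jewels[system]["downtime_cost_per_hr"] * hours_to_recover
               for system in scenarios[name])

print(scenario_impact("ransomware_on_windows", hours_to_recover=8))  # 440400
```

Even rough numbers like these turn "we should segment" into a business-impact argument that leadership can actually weigh.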

David (24:33)
And what would be, if you start with a plan, you start with the risk assessment, you hopefully have a policy, or not, but if not, go for one. What would be the most important first good versus bad deliverables? Like, what are for you, I don’t know, green flags and red flags, if you see something in a document or with a customer?

Danielle Jablanski (24:59)
Yeah,

so we typically start with a needs assessment, which reviews all those policies, procedures, and what the deliverables are. As a firm, we get to set the scope and the specs for a lot of our engagements. As a red flag, I see people rush to procure tools. I like to do, people call this different things, a needs assessment or a tool rationalization. And sometimes that means looking at third-party tools and doing some vendor scoping. I’m happy to do that. But before that, I actually like to look at

David (25:20)
Mm-hmm.

Danielle Jablanski (25:27)
current capabilities and current tooling, and see if you can utilize those to better effect. A lot of times people have different network monitoring solutions and they don’t have a NOC, a network operations center, but they want to go and procure a security tool. And sometimes those security tools will do things that their current monitoring already does. So do you need another solution that does something similar, or do you need a SIEM solution? Do you need something else that might actually maximize the utility of your existing data?

David (25:45)
Yeah.

Danielle Jablanski (25:56)
And those things. So then you get into conversations around, well, this business department owns that, and so it doesn’t flow here, or we don’t have visibility into this. And that’s business visibility, not network visibility. So you have that kind of cultural red flag, which is, okay, wait a minute, let’s drill down into why or why not this part is part of your security, your centralized policy, right? So back to some of those things. And then the other red flag I’ll see, which is something culturally we’re really trying to get around at STV, is,

you know, it’ll be written into policies. Our integrator is responsible for this.

David (26:32)
Yeah.

Danielle Jablanski (26:32)
And it’s like,

that’s great. I’m glad that you have retainers and relationships and trusted advisors out there in the world. But when push comes to shove, you can’t test that resource unless you’ve built a mechanism to test that resource. And I’ve seen different sectors that will actually have an integrator or an engineering firm listed as their cybersecurity incident response retainer. So I’ll reach out to that provider and ask, what does your cybersecurity program look like,

David (26:42)
No.

Mm-hmm.

Danielle Jablanski (27:01)
and they either don’t have one or it’s in the works.

David (27:04)
Yeah, and in the end, it’s still the operator who faces all the risks.

Danielle Jablanski (27:09)
Exactly.

Well, yeah. And then we joke that, you know, the operator will blame the integrator, the integrator will blame the vendor, and the vendor will blame the operator, because once they deploy it, they don’t configure it. And so the liability doesn’t always extend to the end user. So again, I don’t want to point fingers or laugh at anyone. But I don’t like this merry-go-round, and I want to get off of it. So as an engineering firm, we’re looking at how do we, to the best of our ability, build this in from the beginning and

help to raise that aptitude. I say aptitude because awareness doesn’t do enough. It’s aptitude: you have to build the capacity. So a couple of things I’m really passionate about right now: again, on the pen testing and IR space, that’s great, those should be goals and deliverables. But what forensic capacity do you have for an incident response today? What logs are you keeping? What is your policy on retention? Those are the things you can build without spending too much money on a tool and depleting your resources.
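One inexpensive piece of that forensic capacity is simply checking whether your log retention covers a realistic investigation window. A back-of-the-envelope sketch, with all rates invented for illustration:

```python
def retention_days(storage_gb: float, events_per_sec: float,
                   avg_event_bytes: int) -> float:
    """How many days of logs fit in a given storage budget."""
    bytes_per_day = events_per_sec * avg_event_bytes * 86_400
    return storage_gb * 1e9 / bytes_per_day

# Invented example: 500 GB budget, 200 events/sec, ~400 bytes per event.
days = retention_days(500, 200, 400)
print(round(days))  # roughly 72 days
```

If dwell time before discovery runs to months, a result like this suggests the retention policy, not the tooling, is the first gap to close.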

David (27:57)
Yeah.

Danielle Jablanski (28:05)
Same with pen testing. What types of attack pattern analysis can you do with your understanding of your systems and their criticality today, so that you can then plan for the right execution of something that’s going to be useful for your team? And I love the analogy that a rising tide raises all boats. Well, all those boats could be everyone in your business department. It could be everyone in your organization. It could be everyone in your city. It could be everyone in your sector, depending on how we do this.

There’s a lot of boats out there. It’s not just a security thing, right? It’s not just a peer-to-peer thing. Everybody should be interested in ways that we can standardize this understanding and the promotion of these aptitudes to have safer environments, right? That’s a shared goal.

David (28:54)
And in general, so there is protecting your operations against possible incidents, but it’s also about being prepared if something happens: what do I do when, can I say, shit hits the fan? I can say shit hits the fan in this podcast. It’s my podcast, I can say whatever I want.

Danielle Jablanski (29:20)
Yeah, yeah. It is. You

can do whatever you want.

David (29:23)

So, this preparedness: what I’ve seen in the past is that most companies might indeed have a policy or they might procure some tools, et cetera, but when it comes to being prepared to actually act upon an incident, that typically is even less available, less common. What’s your take? How do you see this from small to large companies?

Danielle Jablanski (29:56)
Yeah, so sometimes it comes down to just exercising plans. Like, a lot of plans sit on shelves and they don’t get exercised. I think we’ve gotten better at that. People do a lot of exercises. I actually heard somebody in the industry a couple of years ago say, I don’t care about what scenario you plan for at a tabletop, I just care that you do one. I didn’t love that. I think it really matters. And something you and I have talked about is fault tolerance.

David (29:59)
Hmm.

Mm-hmm.

Mm-hmm.

Danielle Jablanski (30:17)
And

so I get really into fault tolerance, because system design is where you can prevent system failures. And we need to be able to understand how cybersecurity incidents can cause system failures, right? That’s the name of the game. But we also need to be able to understand what is impossible, what is not plausible. And you can’t do that without really getting to that root cause analysis either. And so, I’m not an expert on every protocol and every sector and every technology out there, right? I can’t tell you how to get from the CCTV to the

David (30:36)
Mm-hmm.

Danielle Jablanski (30:45)
vehicle protocol to the rail protocol, right? I’m not a wizard. But I do know that people know these things. And so for a long time, you know, people would kind of assume some Hollywood scenarios. My SANS talk a couple of years ago was all about Hollywood scenarios. And some of them are relevant and some of them are not. Like WarGames years ago, I believe it was Reagan who asked, can that happen, when it was a movie. And now you have the Netflix movie with the

beaconing of the Teslas to one location. And we’ve seen that with some of the robotaxi services. So some of it is relevant. Then, years and years ago, I was doing a tabletop with a city, and they were talking about how they were worried that somebody could hack all of their traffic light systems and everything could be green at the same time in the same ways. And those systems actually can’t do that. Their logic won’t allow it. But that doesn’t mean that they’re safe. That just means that that one scenario isn’t the most plausible for you to plan for.

David (31:34)
Yeah, yeah.

Danielle Jablanski (31:41)
Another example I like to work with is hospitals and airports. There’s a couple different systems that people like to focus on. They always forget the parking meters. So parking meters have become really digital. They are both a huge revenue driver, and they also have a traffic flow dynamic to them. So if you could target parking meters in a specific way, you might cause some other issues, depending on if it’s a parking structure, like a building, a parking

garage or otherwise. I mean, you could really cause some significant delays or mishaps, or you could really see a lot of intrusion from connecting things like that to other resources in a city. And so that was one that I proposed to them as a scenario to look at, because they had outsourced their parking meters to a third party company. And the city knew about it because they were upset that the revenue did not go to the city anymore, right? It went to paying for kind of a subscription model over time.

They didn’t want to pay for the installation up front, and so the revenue from the parking meters for a certain amount of time had to go to that company instead of the city. Interesting, right? And that procurement, OpEx versus CapEx, conversation too. Yeah.

David (32:49)
Yeah,

no. Is that talk available on YouTube or another platform? Yeah?

Danielle Jablanski (32:55)
The Hollywood Scenarios one?

Yes, yeah, it’s out there. It’s one of the SANS ones where they do the really cool live capture with the cartoon sketch behind it. Yeah, yeah.

David (33:03)
Oh, cool.

I have to look that up and link it in the show notes. I will do that. We could do a deep dive into so many topics, but I have this feeling that we should probably do more series around cybersecurity on our blog and on our podcast. We’ve done one on data platforms. We will be doing one on manufacturing AI

Danielle Jablanski (33:09)
That’d be great, yeah.

David (33:32)
in the second half of this year, and I think the next one should be about cybersecurity. I’ve made the decision right here, right now.

Danielle Jablanski (33:38)
Good, and I’ve got a list of folks for you too, friends

of mine that are brilliant and total experts in different spaces that I can put you in touch with.

David (33:46)
That would be super interesting.

But I want to end this one with one final question. For people who are watching or listening to these episodes and they go like, this sounds super interesting, I want to get into cybersecurity myself, or in this case, into OT cybersecurity: what advice would you give them? Where do you start? Are there

educational programs? Is it more about finding a junior position somewhere? I don’t know. What would be your advice right now?

Danielle Jablanski (34:22)
There’s a lot of…

There’s a lot of things that cost a lot of money, and that’s unfortunate. So, I’m trying to find a couple of resources quickly. There are a lot of things online that you can find for free that are not these robust, like, 18-week boot camps for cybersecurity. So it really depends on where you’re coming from. I’ve heard this, not argument, but I’ve heard people say, oh, I can take a network engineer and teach some OT, or I can take a real engineer, not, you know,

David (34:29)
Yeah.

Mm-hmm.

Danielle Jablanski (34:58)
a physical engineer, whatever you want to call it, and, you know, train them on the networks and the systems and the protocols. And it’s like, no, you can take a person and you can train them based on their interest and their aptitude, period. You know, I heard somebody recently say, it’s easier and I’ve always had more success doing one or the other. And I’m like, you’re just talking about people and experiences, right? That’s it. We’re just humans. So maybe you had a bad experience with a human who didn’t want to invest in learning one side or the other, but I don’t think anyone’s better at either.

David (35:19)
Yeah.

Danielle Jablanski (35:28)
Right. There’s a lot of OT content out there that is really just networking on steroids. And then there’s a lot of really, really good, you know, ICS-specific content. Do you understand these controls? Do you understand the logic that was borrowed from electric relays and put into the ICS components that allowed for distributed communications, those kinds of things that we can break down easily? I know that’s not a great answer, but there’s this push out there to say, do one or the other. Do whatever you’re interested in, and do it

as much as your resources allow for, in your time, your aptitude and your commitment. So some people say go out and buy a PLC on eBay or buy one of the trainer kits. Those are kind of expensive. But if you’re a hands-on person and you know that you’re going to spend your free time picking up something in your hands and messing with it, like many of us did years ago with the lockpick kits when we started, exactly, then do that. But if you know that you’ll buy that thing, it’ll sit under your desk (I have three under my desk) and you won’t touch it for a year, then don’t do that.

David (35:57)
Mm-hmm.

Yeah.

Yes.

Danielle Jablanski (36:23)
If you think you’re somebody who learns more visually, then go to YouTube. YouTube is a wealth of information on automation, instrumentation, historians, right? We didn’t even talk about specific systems and data analysis. There’s really good free training out there. I want to find one because there’s one I use in my course. Let me see if I can find it.

David (36:44)
But while you’re looking for that, I personally also think that it’s a team effort. So you need…

Danielle Jablanski (36:52)
I found it.

David (36:55)
a cybersecurity expert, sorry, an OT expert, sorry: you need the knowledge on OT protocols and OT systems and OT architectures. You need to understand things like fault tolerance and all that type of thing, which is super important. On the other hand, you also need the people who really come from a networking perspective, from an OS perspective. So it has to come together somehow.

Danielle Jablanski (37:16)
Right.

Right. You do need both. But let’s say you’re an engineer and you’re interested in the sensors and different data collection as an extension, and again, that relay logic and how that all comes together, because maybe you’re not a controls engineer, but you know that systems of systems make sense. You’ll start there. Maybe you are more interested in the actual data, and you might start to look into how they’re building the network analysis tools that are doing the continuous monitoring in that solutions space. So you might go more to that architect side.

David (37:34)
Yeah.

Danielle Jablanski (37:48)
The resource I was thinking of is learn.automationcommunity.com. They have a ton of free resources. We never really talked about them because they’re more on the instrumentation and automation side, but those are the components, the logic and the programming that we’re talking about when we’re saying you can target different systems to access and change things like set points, variables, tag values, right? None of that’s going to make sense to you

David (37:59)
Mm-hmm.

Danielle Jablanski (38:11)
If you haven’t looked into what that means in the automation world. So there’s this kind of hybrid approach to it. It’s not great that it’s a choose-your-own-adventure; there’s no one certification out there that’s going to give you the biggest bang for your buck. But you know what you’re interested in. So, again, YouTube, Automation Community, free stuff. It’s out there. Spend a little time looking at it and I think you’ll find a really unique

David (38:12)
Yeah.

Danielle Jablanski (38:35)
world that you didn’t know existed. The other book I always plug, which he’s never asked me to do, and I don’t even know if he knows that I do it, is the Grady Hillhouse book. It’s called Engineering in Plain Sight. We can’t really see it here, but we can link it. He has a YouTube channel that he has promoted for a long time. He does a really, really beautiful job of just describing the world around us in terms of engineering. I’ve joked with some friends that had I known about

David (38:49)
Yeah, yeah, I see it.

Danielle Jablanski (39:01)
this world when I was younger, I would actually have gotten a degree in materials science. I think it’s incredibly interesting. But this has nothing to do with cybersecurity. It’s an illustrated field guide to the constructed environment. And so it has the electrical grid, communications, roadways, bridges and tunnels, railways, dams, levees and coastal structures, municipal water and wastewater, and construction. And the reason I love this book is because the more sections I read in it, the more I start to notice

David (39:06)
⁓ yeah.

Danielle Jablanski (39:29)
things around me in the world when I’m driving to pick up my daughter or going on a trip. You know how much infrastructure around us is truly cyber-physical and digitized today. And obviously we’re only getting more and more of it in the future. So it’s crazy.

David (39:30)
Yeah!

Yeah,

I also like that a lot. I do the same with my kids, or actually also with my wife: I go like, you see this? That actually works like that, and they have a control system that does this. And then after a couple of words it typically goes like, yeah, yeah, yeah. But anyway, I’ve tried it.

Danielle Jablanski (40:01)
Yeah, well, and you can do the OSINT. So for one of my courses,

there’s a midterm, and there’s actually photos on there of a manufacturing plant where we live where you can see their SCADA racks. You can see their PLC racks in their Google images. Yeah.

David (40:13)
Cool, that’s in their Google images. Oh wow. Yeah.

That’s also something I could advise to the listeners: that’s open source intelligence, and you can find a lot of very, very interesting things out there. So yeah, absolutely. All right, so I will keep true to my promise. We’re going to do a cybersecurity series later this year. Still need to plan that, but…

Danielle Jablanski (40:27)
There is.

Yeah.

David (40:43)
We have to do that. I think it’s too important, and it’s also perfectly aligned with the mission of the IT/OT Insider. Danielle, thank you so much for sharing your insights.

Danielle Jablanski (40:54)
Thank you, David. I’ll also plug the Copenhagen Industrial Cybersecurity Event for folks across the pond, as they say. I know that’s a great one. It offers really in-depth, technical nuance, but also a lot of beginner content as well. They do a really good job of gauging the audience and making sure everyone’s invited and it’s inclusive. So that’s the one I’ll plug for your side of the world.

David (41:19)
Perfect, I’m gonna also add a link to that event in the show notes. How can people reach out to you, Danielle?

Danielle Jablanski (41:27)
I’m on LinkedIn. I think I’m the only Danielle Jablanski. If we have no one in common, it’ll pop up as Danielle J. But yeah, happy to connect, happy to talk, and happy to be here for the next 20 years doing exactly this.

David (41:31)
Yeah.

I’ll hold you to that promise. All right. Okay, perfect. Thanks again, and also to our listeners for tuning in. If you enjoyed the conversation, as always, make sure to subscribe via itotinsider.com if you’re not yet subscribed. Or also take a look at itot.academy. And yeah, see you next time for more insights on bridging IT and…

Danielle Jablanski (41:42)
Yeah, I’ll see you then. I’ll see you then, in 20 more years.

David (42:06)
OT and until then, take care, bye bye.