The Defuse Podcast: Beyond Security - The Science of Feeling Safer

OSINT Part 2 - The Digital Lens Into Personal Security with Jon Blake

• Philip Grindell MSc CSyP




In the second part, Philip and Jon delve deeper into technical aspects of OSINT and digital security:
- VPNs (Virtual Private Networks): They discuss how VPNs create secure, encrypted connections through virtual "tunnels" and their benefits for security, particularly when using public Wi-Fi. They also address limitations, including potential impacts on speed and some websites detecting and blocking VPN access.
- The security risks of public Wi-Fi networks, with Jon recommending always using a VPN when connecting to hotel, airport, or other public networks.
- Using OSINT for due diligence, including understanding a person's "digital footprint" (information they know about and control) versus their "digital shadow" (information they may not be aware of, like data breaches).
- Techniques for conducting effective searches, including using Google Advanced Search rather than basic searches, and applying filters strategically.
- The challenges of images online, including metadata that can reveal sensitive information like location data, and the use of reverse image search technology.
- The emerging role of AI and automation in OSINT investigations, with Jon cautioning that while AI tools can be helpful, they need human verification and should be used carefully.
- Deep fakes and their security implications, particularly for high-profile individuals and organizations, with Jon emphasizing the need for verification processes and code words as potential safeguards.


The podcast concludes with Jon highlighting his company's training programs and customizable "cyber investigator pathway" courses available through Cyber Ops Global.


Subscribe to 'Defuse News', our weekly update of the week's events on our website.

Follow me on X/Twitter

Connect with me on LinkedIn


Speaker 1:

Welcome to the Defuse podcast with host Philip Grindell, CEO and founder of Defuse, a global threat and intelligence consultancy that blends psychology and intelligence to mitigate threats and risks to prominent people and brands. This is an impact series focusing on personal threat management, protective intelligence and crisis response.

Speaker 2:

It's designed for high-profile individuals, corporate leaders and security professionals, and each episode delivers expert insights on navigating real-world threats, both digital and physical. So this is part two of our conversation with Jon Blake from CyberOps Global, a former colleague of mine, and it's on OSINT, the digital lens into personal security, with Jon. So, Jon, we've obviously had part one. For those people who may, for some reason, not have heard part one, can you just give us an introduction to your background, your career and what you're doing now?

Speaker 3:

Yeah, so thanks for having me again, Phil. So yeah, I'm an ex-operational detective in London. I went through the usual processes of becoming a detective and then moved into the sort of cyber space, looking at how criminals are using technology to commit crime and how we could counter that, really. I ended up in a national coordinator's job for a few years, back in 2011 to 2015, and then left the police after 26 years and stepped into the private sector, where I've worked in cyber investigation, consultancy and training.

Speaker 2:

Brilliant, Jon. And Jon is somebody that we use at Defuse Global for some of our OSINT investigations and also for cyber advice around the subject. So, in part one we talked about the basics: what is OSINT, what is intelligence actually? We talked about some of the myths and some of the realities, and some of the terminology around the surface web, deep and dark web, and Triple-I, which is a new policing term. And we talked about IP addresses, and at the end of part one you referenced a term which I think is worth starting with here, which is VPN. So let's talk about that, because I use VPNs, but there are times I turn mine off because it becomes really slow and almost painful at times. So what is a VPN and what's it designed to do?

Speaker 3:

In simple terms, a VPN, which stands for virtual private network, is a way of obtaining a secure, encrypted connection from a device to another location on the internet. So if you think about running a really long cable from your house to wherever you are in the world, it would be very difficult for anyone to tap into that cable and get the information that's being passed. What a VPN effectively does is create a virtual cable, which we call a tunnel, which is encrypted. So the data is encrypted on your device and it's sent over to wherever it's going to be received. That would be the definition of a corporate VPN. A lot of people will be familiar with remote working: you switch your VPN on, you connect remotely via your Wi-Fi, and you plug into your office or your company systems; you're onto the perimeter of their network. Everything that you do from that point will be routed via your office. So if you've got a UK business and you're in Africa somewhere doing some work, you connect, and everything will go out as if you are in the UK for your business. So that's the technology. But what the consumer VPN does is, rather than being pre-configured to connect to a specific place, you can choose where in the world you want to route your traffic through, and then you'll get the footprint provided by that provider. From a security perspective, VPNs are great, and actually the National Cyber Security Centre, which is the public-facing part of GCHQ, really, recommends that we use VPNs when we're away from our home networks.

Speaker 3:

But the downsides, of course, are that sometimes the VPN services will log our information, so they might log our true IP address. That could be useful in an investigation. Speed is another issue: if you're using a certain type of VPN, they may throttle down the speed for streaming, for example, and it gets all buffery. Then you've got the paid-for versus free VPNs. If you're paying for something, you would expect a decent speed. If you're using a free VPN, you're not paying, so what are they doing with your information? That's the question I would be asking. But again, VPNs do cause us problems in investigations, though it's not insurmountable. It depends on what type of VPN it is and what the motivation of the person using it is.

Speaker 2:

And one of the other things I find when using VPNs, not very often, is that it restricts access to certain sites. So sometimes it'll say to me you're not in the UK, because it positions you so it doesn't appear that you're in the UK, so you can't access these particular sites.

Speaker 3:

Yeah, a classic example of that is online streaming services. So you might think, actually, I'm in the UK, I'm going to use a VPN and try and get onto the US one, and then you'll get this message saying you're using a proxy service, please disconnect and reconnect, because these services are region-based. We're paying for the service in our region, and I'm not going to name the popular one, but I think we all know which one it is. The point is that they are using technology to detect the VPN connection. So they will say, look, we think you're on a proxy service; restart your internet connection, don't use a proxy service, and then you'll get what you pay for. So there are technologies out there that will detect it.

Speaker 3:

But from an investigator's point of view, we know that people use VPNs to manage their footprints. We know that people use VPNs to access material that isn't available in a certain country. And if you're in a country that is oppressive, then you can use a VPN to get hold of material that might be getting blocked at ISP level. So I would say that a VPN is not an indicator of any criminality or subversiveness. It's just technology that we use to keep ourselves safe.

Speaker 2:

So is it something you would recommend that clients use, then, so that they can, I suppose, distance themselves from their home address or their hometown, or that sort of thing?

Speaker 3:

100%. I mean, even forget about the geographical side of it for a minute. If you're in a hotel, you're connecting to the hotel's router. They can see your traffic; they can see what you're connecting to, and why, and for how long. So if you use a VPN, that will take you out of their network into an independent network that probably isn't interested in what you're looking at, and it keeps you safe. If you are out using public internet connections, public Wi-Fi, airports, hotels, railway stations, I would always use a VPN. I highly recommend it, just for safety. And the VPN that I use has got a smart connect button, so you just click smart connect and it connects you to the nearest, fastest server. We're not talking about trying to route through a different country; all we want is a secure tunnel out of where we are.

Speaker 2:

What's your view, from a security perspective, on logging on to public Wi-Fi? Can we talk about that for a minute?

Speaker 3:

Yeah, of course. I mean, look, everyone needs Wi-Fi, everyone needs connectivity. I'm personally not a great fan of hotel Wi-Fi. I try not to use it, only because I don't really trust what they're going to do with the data, and quite often it's a rubbish connection anyway. So I try and use 4G and 5G where I can. But does it matter that the hotel can see that we're watching BBC iPlayer? Not really. It depends on what you're doing; it depends on what type of information you're passing over the internet. If you're logging into banking, if you're logging into secure services, email, that type of stuff, I would definitely think about using a VPN. And, by the way, the threat isn't just from the provider. To use the hotel as an example: you don't know who else is on that network. So you've got to be aware of that. It's not necessarily the hotel or the provider that's the threat; it's other people on that network.

Speaker 2:

So you might have an adversary who is targeting you, who follows you into that particular venue, a hotel, a coffee shop or whatever, and logs on to the public Wi-Fi in an attempt to actually get into your system, to get into what you're doing, or certainly to monitor your traffic.

Speaker 3:

Yeah, absolutely, and it's become more difficult over the years. We used to do a demo on our courses using a piece of software that's freely available; that now doesn't work, because security has been put in place. But there are some very clever people out there, and you've only got to look at some of the demonstrations from the security conferences like Black Hat. You'll see some really, really bright people that are developing software all the time to counter the new security. But I think the thing about cybersecurity, Phil, is just use common sense.

Speaker 2:

So, talking of common sense, let's go on to things like communicating on WhatsApp, then, as we've seen recently, and Signal and these other apps. We were always told, weren't we, years ago, that WhatsApp was encrypted and safe, et cetera. That's clearly not quite the case. I mean, it's encrypted, but not massively safe. So can we talk about these messaging platforms from an OSINT perspective and from a personal security perspective?

Speaker 3:

Well, certainly from a personal security perspective, WhatsApp's fine. Some of the people that I work with suffer from what I call cyber paranoia: everything's dangerous. But for day-to-day use, WhatsApp's encrypted end-to-end. So from when you send that message until it's received, no one's going to be able to read it, not even Meta. But the danger comes if someone gets hold of your device, because the messages are held in the clear on your device.

Speaker 3:

Another place where we see high risk with WhatsApp, and other messaging apps as well, is when people use the app on their laptop. What they'll do is sync their phone to it, and you'll see them quite often: they'll be busy working away, and in the corner will be a little WhatsApp box, and they'll be responding to messages. There's definitely a trace there on the computer. So it's all about where is the information that we're leaving behind. But again, it's about the sensitivity of the type of information that you're passing. If it's day-to-day chit-chat, WhatsApp's going to be absolutely fine, but the problem comes with the security of the actual device. There are some applications that will encrypt messages on the device, which makes our job from a digital forensics perspective a lot more difficult. But look, again, it really does come back to common sense: should we be passing that level of sensitive information over public networks?

Speaker 2:

And I guess the point was made with the recent US defense conversation on Signal, as an example. Using Signal is not necessarily an issue. It's about (a) what are you discussing, and (b) do you know who else is on there?

Speaker 3:

And your behaviour. Yeah, absolutely, and that was a group rather than a one-to-one chat. I'm always cautious of groups, and we talked on the last podcast about how sometimes part of our job is to infiltrate those groups to find out what's being talked about, particularly where a client is worried that their product or their information might be for sale. So yeah, once you get into groups, it's a different environment, and you need to be a lot more careful about how you conduct yourself and what you do. Certainly, from an open source perspective or an investigation perspective, we would be looking to identify what groups people are in and then think about how we can best leverage that information, whether it's just looking at it, because the fact that they're in a group might be enough, or whether we might want to actually have a look at what's being talked about.

Speaker 2:

Can we talk about OSINT in terms of doing things like due diligence, then? You and I have talked previously around the subject of what people call due diligence, or pre-employment screening, all that sort of stuff. Can we talk about the benefits and the downsides of using OSINT for that?

Speaker 3:

Yeah, okay. So the first thing: when we talk about open source, what we're actually talking about is publicly available information. If something's publicly available, the person may know about it, or they may not. What's quite interesting is that we talk about something called the digital footprint, and "it's in my digital footprint" is often used as a catch-all term. Your digital footprint is the stuff that you know about, the stuff that you know is out there, the stuff that you agreed to, the stuff that you can control. But there is another part that is less known: the stuff we don't know about, which we call the digital shadow. That could include things like data breaches, where your information has been accessed by third parties and then published on the internet. We might not be aware of that, and it might include our name, our date of birth, our address, our phone number. So that's a really good source of information for us, and it's not intelligence at that point, it's just information until we verify it. But we use data breaches a lot to try and find out things about persons of interest, and also in due diligence, when we're asking: is there anything out there? And in security work as well. We've had a few clients come to us who want to know what's out there about them; they want a snapshot, and that's the place we would look. So I would say that using open source techniques for due diligence is really important.
It gives you a picture of a person's online life and you will uncover things that they either don't know about or they've forgotten about.

Speaker 3:

Now, that might be positive; it could be negative. We've all got history; we've all got stuff online. Youngsters are having this problem: when they're kids, they're posting stuff online that might be inappropriate, then a few years later they've forgotten about it, and it comes up in a university interview, or a job interview, or an internship: what's this? Why have you got these views? Do you still hold these views? So it's not always the end of the world for people, but it can be damaging. So we use it a lot.

Speaker 2:

Yeah, I mean, it's interesting. You talked about data breaches, et cetera, and we have a product, which you contribute to, which is our privacy package, where for clients we analyse on a monthly basis what's out there about them and then remove or change or adapt information. One of the other things that always strikes me as interesting is when people say "you won't find anything because I'm not on social media", et cetera, and of course what they forget is that other people are.

Speaker 3:

Yeah, correct. One of the vetting checks we would do for covert officers in the police was: what is out there about their relationship with the police? And I remember one girl: she was a really good operator and she was really fit for the job. Then we did some social media work around her, and she was in the police netball association, and she was a really good quality player, and there were photographs all over Facebook. Now, unfortunately for her, she was very distinctive in appearance.

Speaker 3:

That was the end of her covert career, because you can't get that stuff removed; you don't know who shared it. I mean, we're going back four or five years. It's not like a post that was made last month, where we could say to the author, get rid of that, please. There was too much of a trace, and she was devastated, but the risk to her was not acceptable. So that's a really good example of stuff we forget about.

Speaker 2:

Can I pick up on a point you made there? So had she deleted it, had it been a couple of months old and she'd deleted it, that doesn't necessarily mean it's gone, though, does it?

Speaker 3:

No, it doesn't, but it would make it a lot more difficult to find. Everything's risk-based, isn't it? When we had that particular case, it went up to a senior officer to make the decision, and she made the decision that we couldn't take the risk, because there was too much of it out there. But yeah, everything's risk-based. We could say, look, delete it and we'll take a chance. If it's been shared, how many times has it been shared? We could have a look at that. You're never going to be 100% sure, but in that case it was just a no.

Speaker 2:

So let's talk about some different techniques, because I think we're getting to the technical bit of it for you, the stuff that you teach people to do. Where do you start?

Speaker 3:

Okay, so everyone knows how to search Google. But when you do a basic Google search, what you're actually doing is going back all the way to the start of Google, and you're also looking at everything: URLs, web pages, PDFs, documents that are out there, all the way back to the start. So the first thing we teach people is how to use Google Advanced Search, how to manipulate those searches to look for the stuff that we want to find. Google's own advice is: look, do a search, see how many results you get, and then use our filters to filter things down.

Speaker 3:

Some of the listeners will remember something from school called Boolean, and Boolean searching was ANDs and NOTs and ORs and that sort of stuff. But Google's got all of that built in, and it's really user-friendly once you know how to use it. So you can go from 30 million results to 250 in a very short period of time by putting date filters in, by looking for certain file types, and by using exact-phrase searches rather than broad AND searches. We spend a lot of time on that, and what I find is that a lot of the youngsters that have recently come through university know all about this stuff, because they've used it in their dissertations. But for the likes of you and me, who remember the days before the internet, this is a revelation, because people don't realise that all those tools exist. It's always about looking for that needle in a haystack, isn't it? But how do you find it? How do you filter down?
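The filtering Jon describes maps onto Google's documented search operators (exact phrases in quotes, OR, - for exclusion, site:, filetype:, before:/after:). As an illustrative sketch, with an invented name and example domain, a query using those operators can be assembled like this:

```python
def build_query(all_words=None, exact_phrase=None, any_words=None,
                none_words=None, site=None, filetype=None,
                after=None, before=None):
    """Assemble a Google query string using the operators that the
    Advanced Search form generates behind the scenes."""
    parts = []
    if all_words:
        parts.append(" ".join(all_words))                 # implicit AND
    if exact_phrase:
        parts.append('"%s"' % exact_phrase)               # exact phrase
    if any_words:
        parts.append("(" + " OR ".join(any_words) + ")")  # Boolean OR
    if none_words:
        parts.extend("-" + w for w in none_words)         # exclusion (NOT)
    if site:
        parts.append("site:" + site)                      # one domain only
    if filetype:
        parts.append("filetype:" + filetype)              # e.g. pdf, docx
    if after:
        parts.append("after:" + after)                    # date filters
    if before:
        parts.append("before:" + before)
    return " ".join(parts)


q = build_query(exact_phrase="Jon Blake", any_words=["OSINT", "cyber"],
                none_words=["podcast"], site="example.com",
                filetype="pdf", after="2023-01-01")
print(q)
# "Jon Blake" (OSINT OR cyber) -podcast site:example.com filetype:pdf after:2023-01-01
```

Pasting the resulting string into the ordinary search box has the same effect as filling in the Advanced Search form, which is why the filters can be applied incrementally.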

Speaker 2:

So you're saying you don't use Boolean searches anymore?

Speaker 3:

You can do, but you don't need to, because it's built into Google, and it's built into other search engines as well. It's about understanding what they do, really, without using the term Boolean, because that puts people off, because it's mathematical. There are features within search engines that will allow you to do this stuff really easily. But I think one of the warnings there is that you filter down incrementally, because otherwise you'll miss stuff.

Speaker 2:

If you put too many filters in, you'll miss stuff. Yeah, so that's a big part of what you teach initially on the foundation training. And are there particular tools, or particular software, or "stuff", if you want to use a nice technical term, that you introduce people to, recommend and use?

Speaker 3:

Yeah, so if you Google "Google Advanced Search", it'll take you to the Advanced Search page, which is a series of boxes. Some people know it exists, some don't. If you're on Google and you go to the bottom right and click Settings, you'll see Advanced Search there. There are a couple of ways you can find it, but what you're looking for is the Google Advanced Search form, and it's pretty self-explanatory: any of these words, all of these words, none of these words. And then, once you get your results, you can start to filter down by date.

Speaker 3:

But we spend a bit of time on that, because it is important that people understand filtering. What's important as well is that we keep an audit trail of what we do, because searching, particularly in the type of work we do for you, is a snapshot in time. We've got clients that we provide monthly reports on, and we're not interested in what happened more than a month ago; we're interested in what's happening now. But what we are also interested in is stuff that might have been backdated.

Speaker 2:

Trying not to miss information, but not being overly complex in the way that you search. But do you introduce them to particular platforms or software or apps that you use?

Speaker 3:

Yeah, we do, in the later courses. There are some online services; the Internet Archive is a good one. The part of it that we use is called the Wayback Machine, which speaks for itself: it looks way back. Rather than indexing web pages, it takes snapshots of them. That's quite useful, because we can say: look, this company looks squeaky clean now, but they haven't always been that way. And the great thing about it is that as soon as the snapshot is taken, that company loses control of that information. But we don't really teach any third-party software, because most of those require subscriptions.
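Beyond the Wayback Machine's web interface, the Internet Archive exposes a public availability API that reports the closest archived snapshot of a URL. A minimal sketch, building the request URL and parsing a response of the shape the API returns (the sample data below is invented, and no live network call is made):

```python
from urllib.parse import urlencode

API = "https://archive.org/wayback/available"

def availability_url(target, timestamp=None):
    """Build a Wayback Machine availability-API request for `target`.
    `timestamp` (YYYYMMDD) asks for the snapshot closest to that date."""
    params = {"url": target}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urlencode(params)

def closest_snapshot(response_json):
    """Pull the closest archived snapshot out of an API response,
    or return None if the URL was never captured."""
    snap = response_json.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"], snap["timestamp"]
    return None

# A response shaped like the one the live API returns (sample data):
sample = {"archived_snapshots": {"closest": {
    "available": True, "status": "200",
    "url": "http://web.archive.org/web/20200101000000/http://example.com/",
    "timestamp": "20200101000000"}}}

print(availability_url("example.com", "20200101"))
print(closest_snapshot(sample))
```

In practice you would fetch the built URL with any HTTP client and pass the decoded JSON to `closest_snapshot`; an empty `archived_snapshots` object means no capture exists.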

Speaker 2:

Yeah. So can we talk, then, about a subject which is on everyone's lips? I'm interested to know what role AI and automation have in OSINT investigations.

Speaker 3:

Yeah, okay. So AI is obviously a very popular thing at the moment, and it's something that we're all really interested in. The first thing about AI is a word of caution: when we generate AI reports using the popular platforms, they normally come with a warning that says, look, you need to check the validity of the information in this report. But, as we were saying just before we started today, I used it the other day. Someone asked me to produce a job description for a cyber investigator role, and I thought, I haven't got one, but let's quickly try ChatGPT, and it was fantastic. It produced this thing almost exactly as I would have written it. So for those types of quick questions, AI is useful. The thing about it is, you've got to ask yourself: where does it get its information from? Some platforms are learning as they go, so when you ask a question to an AI platform, generally that might inform future answers, and that can be useful.

Speaker 3:

Some AI platforms look out onto the World Wide Web; some don't, and use their own databases instead. But it's an emerging role, is how I would put it at the moment. Certainly, one of our big clients is looking at it; I've got a meeting next week to talk about how AI can be used in investigations, and my answer to that is: cautiously. But it's coming, and we can't do anything about it, and why would we want to change that? Anything that helps us do our job is fantastic, but we do need to understand that it can make mistakes. It's only as good as what it knows, and I think at this stage it's a really good starting point, but we do need to verify things manually, particularly in investigations.

Speaker 2:

And what about automation? What other tools are you using?

Speaker 3:

Yeah, so automation is great. We've been using it in the digital forensics world for years; we use e-discovery platforms.

Speaker 3:

The one that I used, and I will name it because it's fantastic, is Nuix. It's a very expensive product; I didn't have a licence for it, but I was working for a company that did. What it does is ingest all the data that you've got, and then you can ask it questions, and that's fantastic, because it saves you a lot of work. But again, it's only as good as the information that's in there, and it's only as good as its algorithms for understanding the question that you ask. Where this was used recently, in an investigation that we did, was a large-scale mail server: millions and millions of emails were seized and put into an e-discovery platform, and then we could ask questions. These platforms have all got an AI front end now, so they understand the question that you're asking; you can ask in plain English. So automation is massively important, and that is coming.
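The ingest-then-query pattern Jon describes can be illustrated with a toy inverted index. This is a deliberately tiny sketch of the idea, not how Nuix or any real e-discovery platform is implemented, and the three sample emails are invented:

```python
from collections import defaultdict

# A toy corpus standing in for ingested emails (real platforms ingest
# millions of messages plus their metadata).
emails = {
    1: "invoice for the offshore transfer attached",
    2: "lunch on friday?",
    3: "delete the transfer records before the audit",
}

# Ingest: map every word to the set of messages that contain it.
index = defaultdict(set)
for msg_id, body in emails.items():
    for word in body.lower().split():
        index[word.strip("?.,!")].add(msg_id)

def search(*terms):
    """Return the IDs of messages containing every search term (AND)."""
    hits = [index.get(t.lower(), set()) for t in terms]
    return sorted(set.intersection(*hits)) if hits else []

print(search("transfer"))            # [1, 3]
print(search("transfer", "audit"))   # [3]
```

The work happens once at ingest; each question afterwards is a cheap set intersection, which is why querying millions of seized emails becomes practical. An AI front end just translates a plain-English question into this kind of structured query.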

Speaker 2:

But then presumably you've always got to check the information?

Speaker 3:

Yeah. Ultimately, certainly in a criminal case or a civil court case, you've got to produce that evidence and stand behind it, so you need to be sure that it's right. But if it's telling you, look here, look here and look here, you go there and look, and you'll either find the thing you're looking for or not. So yeah: automation, fantastic; AI, fantastic; just use that cautious approach.

Speaker 2:

So, coming towards the end of our second session, and we could probably go on all day because it's such a big topic and such an interesting one; I think we might have to do another session, Jon, because there are so many different questions and it's such a big part of what we do. If you were advising high-profile clients, then, about how to improve their security, how to either use or protect themselves from what's on, let's say, the internet, for want of a better term, what sort of things would you be talking about?

Speaker 3:

Okay. So I suppose, if we get down to the basics, there are things about making sure that you're only sharing information that you're comfortable with. We talked about the digital footprint here, so this is the stuff that we control: privacy settings on things like Facebook. In certain countries now you can actually lock your Facebook profile. Unfortunately, the UK isn't one of them, although I am told it's coming.

Speaker 3:

So use the security features that are built into the platforms. For example, Instagram has private accounts, and on Facebook you can lock the information down, but by default they're not necessarily particularly secure platforms. We're looking at it from the perspective of an individual; Facebook will look at it from the perspective of "we're a business, here's our tool, it's really good, and you can find people and find out stuff".

Speaker 3:

But what we're trying to do is limit that hostile activity, if you like. So it's about understanding what you're sharing, who you're sharing it with, and how you secure the platforms that you're using. And I think the final thing would be understanding what information could be used to harm you. If a hostile person was looking to do you harm, what information would they be looking for? Obviously name, address and date of birth are a given, but then there'll be other stuff around the type of work that you do, the industry that you're in, who you are, what you're about. So it's about identifying the high-risk information in your world, and asking: how do we minimise that? How can we secure it?

Speaker 2:

So can we just touch, then, on another area that we've done quite a bit of work on, and I know you've helped us with this as well, which is images and reverse image searches, and how people use those. A classic one is someone putting up a picture of their kids in their school uniform or their football kit, and then that can be used to identify which school they attend or which teams they play for. So can we just talk about images and reverse images? And the other issue, which people probably don't even realise, and which we touched on previously, is that other people will put pictures up that you're not aware of. Someone else will post the football team photo or the school picture, et cetera. How do you keep track of all that?

Speaker 3:

Yeah, so digital images are a big part of the World Wide Web, a big part of the internet and social media. Certainly from an investigator's perspective, we can search images to see other occurrences. That's fairly old technology, and something that we cover on our foundation training. But obviously, as I said in the previous podcast, search engines can only record information that they can access. So if you do an image search, you might get some results that say this image is on quite a few Facebook profiles. That doesn't mean the search engine has found them all, because it's only allowed access to certain Facebook profiles. You can actually switch that off in Facebook and say: don't let search engines index my content. So you can't be sure: you might find a few occurrences of an image, but not all of them.
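The point Jon makes later about image search tools finding even cropped copies usually rests on perceptual hashing: instead of matching exact bytes, tools derive a short fingerprint from the image's brightness pattern, so near-duplicates hash to nearby values. A minimal illustrative sketch of one such scheme, a "difference hash" over an already-decoded 9x8 grayscale grid (real tools first decode and resize the image, which is omitted here):

```python
def dhash(pixels):
    """Difference hash of a grid of grayscale rows (9 values per row):
    each bit records whether a pixel is brighter than its right neighbour."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes: small = likely the same image."""
    return bin(a ^ b).count("1")

# A uniformly brightened copy keeps the same relative ordering of pixels,
# so it hashes identically -- which is why simple edits don't defeat the search.
grid = [list(range(9)) for _ in range(8)]
brighter = [[p + 10 for p in row] for row in grid]
print(hamming(dhash(grid), dhash(brighter)))  # 0: recognised as the same picture
```

Production systems (and the commercial tools Jon refers to) layer far more robustness on top, but the core idea is this tolerance to small changes.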

Speaker 3:

But the one that we really go to town on in our training is the metadata. Metadata is data about data: not just the photograph itself, but the time, the date, the location, the device, that type of stuff. Now, certain platforms like Facebook strip it out: if you upload a photo to Facebook, we're not going to be able to get that information. But on blogs and on websites, that information is quite often available.

Speaker 3:

So one of the things I would say is: if you are publishing stuff, make sure that information is either stripped out or edited down, and that's where you need a little bit of technical help. There are some tools that will strip the data out. Sometimes you need some data in there, but not the stuff that's going to tell you where the photograph was taken, for example. So there's a lot around imagery that can be harmful. But again, it's back to the common-sense approach: don't publish anything that you know would be harmful. And due diligence is important: regular due diligence, regular searching. The image search tools are really good now, so they will find stuff even if an image has been cropped. So there's a lot that can be done.
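Jon's point about stripping metadata can be made concrete. The EXIF block (timestamps, device model, GPS coordinates) travels in a JPEG's APP1 marker segment, so removing that segment removes the location data. A simplified sketch in Python, using only the standard library and a synthetic JPEG for illustration; dedicated tools such as exiftool handle far more edge cases:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with EXIF APP1 segments removed."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:
            break  # malformed stream; copy the remainder untouched
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, keep it all
            out += jpeg[i:]
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        # APP1 (0xE1) with an "Exif" payload carries time, device and GPS data:
        # drop that segment, keep every other one.
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

Running this over a photo before publishing removes the "where was this taken" answer while leaving the picture itself intact.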

Speaker 2:

Yeah. When we talk about images, I'm conscious of time, but can we just touch on things like deepfakes? Because that's again a topic that people worry about: deepfake videos, all that sort of stuff.

Speaker 3:

Yeah. So nothing is ever what it appears to be, and we use something in investigations called the ABC: Assume nothing, Believe no one, Check and challenge everything. There are a few variations of that, but they're all the same. So we would never assume that something's correct, we don't believe it, and we would check and challenge. There are some really good AI detection tools now that you can use. Some of them are commercial, but we use a couple of them, and we did a piece of work for you recently on this, where we can potentially detect whether an image is AI-created or not. They're not 100% reliable, but what they do give you is a percentage of how likely it is, and this works with AI-generated text as well, by the way. They're looking for the tell-tale features, though AI is developing and getting better and better. So yeah, deepfakes are problematic, and they're often used in harmful ways.

Speaker 2:

So fake videos, deepfake videos. One side of it, which a lot of people talk about, is the kind of pornography that's been created of celebrities, et cetera. The other side of it, which we've dealt with a couple of times, is when organisations are targeted with deepfake videos purporting to be a CEO or someone else. What's the defence against those?

Speaker 3:

Well, there isn't one. You've just got to be cautious. I was talking to somebody at a conference a few weeks ago about this. A guy was called on a WhatsApp voice call, and the voice appeared to be the person he was expecting to call, but clearly it was a scam. So this is very, very difficult, because in that instance you're generally called when you're on the move. You're at a railway station or in an airport lounge or something similar. You haven't really got the time to check and challenge, and why would you? You've got other stuff going on. The number is the person you're expecting, the voice sounds like the person. So you've got all of these issues to contend with.

Speaker 3:

I think it comes back down to real-world security: am I comfortable giving this information out? And you and I have had this discussion before about what information people pass over WhatsApp or on a voice call. Yeah, the platform is secure, but what about the people around you? So that's what we've always got to be thinking: what is the security risk to this conversation that I'm having now? Why am I having it now? How do we verify it? Is it an unexpected call? And when you talk about the videos, is there anything in the video that is unusual? I think that's what it is. It's not the actual quality of the video, it's the behaviour. What is being said? How is it being said? What is the language being used? Is that a different type of language? We've got to be switched on around it, Phil. We can't…

Speaker 2:

There's no easy fix. Yeah, I mean, I know that one of the things we advise around that, much like kidnap cases, is code words and things like that, which again is the checking phase of it. So, Jon, listen, this has been two really, really fantastic sessions, and I should have asked you this on the last podcast as well: if people want to get hold of you, where do they find you? And can you give us a snapshot of what sort of courses you run?
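The code-word idea Philip mentions can be taken a step further. A single reusable word can be overheard; instead, both parties can pre-share a secret out of band and verify each call with a fresh challenge and an HMAC-based response. This is purely an illustrative sketch, not something prescribed in the episode; the secret value and function names are hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical pre-shared secret, exchanged in person, never over the channel itself.
SECRET = b"example pre-shared secret"

def challenge() -> str:
    """Caller side: generate a fresh random challenge for this call only."""
    return secrets.token_hex(8)

def respond(secret: bytes, chal: str) -> str:
    """Callee side: prove knowledge of the secret without ever saying it aloud."""
    return hmac.new(secret, chal.encode(), hashlib.sha256).hexdigest()[:8]

def verify(secret: bytes, chal: str, answer: str) -> bool:
    """Caller side: constant-time check of the expected response."""
    return hmac.compare_digest(respond(secret, chal), answer)
```

Because each challenge is single-use, replaying an overheard response on a later call fails verification, which is exactly the "check and challenge" step a cloned voice cannot fake.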

Speaker 3:

Yeah, the website is the easiest way: cyberopsglobal.co.uk. It's got a contact page on there, and you'll find me on LinkedIn, Jon Blake. One of the things I would say about the training is that we do individual training programmes; they're all on the website. But we also do something that we call a cyber investigator pathway, which is good for teams. Basically, you pick three courses that fit your world, then you pick two additional add-on courses, and that gives you an accredited cyber investigation certificate. We moved away from generic training a long time ago; we encourage people to take the pathway, and if people want training, we can customise it to their platforms. We're not talking about generic training anymore; this is all specific, bespoke training for particular environments. So you've got different modules that people can take.

Speaker 2:

Yeah, that will kind of create a particular specialism?

Speaker 3:

Yeah, exactly. For example, you might want to do a basic open source course. You might want to do a basic cyber investigator course, undercover online, and then you might want to do intelligence and producing evidence in court. That's all on the website. But it's more about people understanding that this is such a broad world now; there isn't one size fits all.

Speaker 2:

Yeah, I think that's really good advice, Jon.

Speaker 2:

Thank you for that. We're going to put all those details in the show notes so everyone can access them: Jon's LinkedIn profile and Jon's website, CyberOps Global, where you can find all the details of those courses. I certainly recommend Jon. We use Jon, and we wouldn't have been using him for five and a half years if we didn't trust him. So we've clearly had some great results.

Speaker 2:

A lot of the work that we're doing involves elements of open source intelligence, be that some of the vulnerability assessments we do on clients to see what's out there about them, or some of the work around identifying people of concern and then effectively lifestyling them online, so that we can assess what their normal behaviour is compared to the behaviour that might be causing problems. So it's a whole world and a really important subject. Like everything, I would suggest there are lots of people out there doing it; there are a few people that do it exceptionally well, and Jon is certainly up there with his team. So, Jon, thank you very much for that. Just a reminder to everyone to subscribe to our newsletter, Defuse News, which you can do on our website and which is out every Monday morning. It's got rave reviews: really informative stuff.

Speaker 2:

And then, of course, this podcast, which we'd invite you to subscribe to and review. Drop us a review and let us know what you think of each episode. So, Jon Blake, thank you once again, mate. Thank you, Phil. You're welcome. Really, really interesting.

Speaker 1:

I've learned a great deal. Thank you for listening to The Defuse Podcast with host Philip Grindell, CEO and founder of Defuse. Please rate, review and subscribe on your favourite podcasting platforms.
