Recovering CISO and Director of (TL)2 Security Thom Langford joins the show to debate Tripwire’s Paul Edon on the privacy and security trade-offs of facial recognition.
Spotify: https://open.spotify.com/episode/5wXKv9DiQjfsZNf6heXg67
Stitcher: https://www.stitcher.com/podcast/the-tripwire-cybersecurity-podcast
RSS: https://tripwire.libsyn.com/rss
YouTube: https://www.youtube.com/playlist?list=PLgTfY3TXF9YKE9pUKp57pGSTaapTLpvC3
The following is an edited excerpt from a recent episode of Tripwire’s Cybersecurity Podcast.
Tim Erlin: Welcome, everyone, to the Tripwire Cybersecurity Podcast. I'm Tim Erlin, vice president of product management and strategy at Tripwire. Today, we're here to talk about facial recognition technologies and their use cases, and I'm joined by a couple of esteemed guests.
Thom Langford: Hello again, Tim. My name is Thom Langford. I am the director of (TL)2 Security and a recovering CISO.
Paul Edon: Hi Tim. My name is Paul Edon. I'm the senior director for technical services for the EMEA part of Tripwire.
TE: Excellent. So as I said, we're here to talk about facial recognition. We're setting this up as a bit of a debate: we've positioned Thom as being against facial recognition and Paul as being for it. Neither of them is taking an extreme position, but we want to foster a debate that shows both sides of the conversation.
Use Cases of Facial Recognition
TE: Why don't we start with a conversation about the use cases we're starting to see in the real world for facial recognition?
PE: Before we jump into that, can we just highlight the fact that facial recognition is used in two main areas, verification and identification, and that they are very different? Verification is the type of thing you do on your phone or your iPad, where the device simply compares your biometric data against the biometric data it holds for you personally on that device. So verification is not such a big issue. Identification, on the other hand, is where the system takes live pictures of people, creates the biometric data and then checks it against a huge database. And I think that's where most of the challenges, and most of the argument, come from.
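Editor’s note: To make that distinction concrete, here is a minimal sketch of the two matching modes. It assumes faces have already been reduced to fixed-length embedding vectors, as most modern pipelines produce; the function names, the cosine-similarity metric and the threshold are illustrative assumptions, not any particular vendor’s API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, stored_template: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Verification (1:1): compare a live capture against the single
    template stored for one person, e.g. on their own phone."""
    return cosine_similarity(live, stored_template) >= threshold

def identify(live: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """Identification (1:N): search a large database of templates for
    the closest match to a live capture, e.g. from a public camera."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(live, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means no match above the threshold

# Toy 3-D "embeddings" just to show the two modes in action:
db = {"alice": np.array([1.0, 0.0, 0.0]), "bob": np.array([0.0, 1.0, 0.0])}
capture = np.array([0.9, 0.1, 0.0])
print(verify(capture, db["alice"]))  # True: one capture vs. one template
print(identify(capture, db))         # "alice": one capture vs. a whole database
```

The privacy asymmetry the guests discuss falls out of the shape of the problem: verification needs only one template, which can live on the user’s own device, while identification requires someone to hold the entire database.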
TL: I do agree with that. My opening gambit was going to be that facial recognition is here. Pandora's box is open. It ain't going away. It's about how it's implemented, and it's, as you say, the difference between verification and identification. The key thing for me, though, is how that data is handled. As a consumer, I trust Apple to ensure that the data remains on the device and within my control. The challenge for me is that if Apple's ethos changes and they start selling that data to governments, then it could subsequently be used for identification.
PE: I would agree. For me, the issue isn't about facial recognition technology. It's more about the holder of the data. I think most people's concerns are around government ownership and private company ownership because governments have proved to us time and time again that they're not really trustworthy. And so have private organizations.
TE: I think Apple is a terrible example in this case because they're one of the few technology companies you don't have to worry about being acquired by someone who changes their core ethos. But if you think about a smaller company, then you have a whole different problem. Regardless of what they lay out in their initial terms of service, the potential for that to change certainly exists in a world of acquisitions, right?
TL: It does, but that's less of an issue as long as the environment in which they operate is managed properly. For instance, regulation might come about governing how that data can be managed, processed and sold. That's largely in the hands of governments, and governments are the things that change even more frequently.
PE: I agree, but I would say that regulation is already in place in some areas. GDPR covers facial recognition. We just need to make sure that kind of regulation is available across the globe.
TE: Right. Regulation is geographically limited, and there are a number of places around the world where GDPR might not apply or it could be difficult to apply. There are countries where data privacy is handled very differently from the European Union.
TL: What happens in 25 years when the political spectrum in Europe has shifted to the point where we're striking out GDPR because the terrorist threat, from whomever it may be, is so great? We'd have to have facial recognition and keep records of everybody just to fight the good fight against this unified threat. Things change on a regular basis.
PE: But that would fit within current regulation anyway because it would come under the public interest. I'm a strong believer that you need to have this available. If we were to take it away from immigration, the police and the military, then we'd be opening ourselves up to threats from known terrorists who could just walk through airports. What I'm not for is private organizations, or even government organizations, using it for the wrong reasons. If you're using it to defend or protect the public, that's fine. If you're using it for anything else, then I think that needs to go through a very strict review process.
TL: But who defines the right reasons? It's the individual countries and the governments they have in power at the time. We have to be careful, because the people who define what is a good use and what is a bad use are not the people whose faces are involved.
PE: Well, it shouldn't be the government. That's for certain. It should be an independent organization appointed by the government.
To Ban or Not to Ban Facial Recognition Technology
TE: I think your point is that whatever regulation you put in place, there is the potential for that regulation to change when the government changes. What's the alternative? Is it that we ban the use of facial recognition entirely? Or is there an alternative that provides some level of appropriate use without government regulation being the enforcement mechanism?
TL: I don't think there is an alternative. I think the key part of this is people understanding what those long-term implications are.
PE: I don't think most of the public realize that facial recognition software is used in places like hospitals to diagnose people with rare genetic diseases. It's used to enable blind people to communicate because it detects a person's facial expressions. There are so many different use cases that the public are not aware of. Now, what people are scared of is the fact that it's used to identify people and that it can make mistakes. It can identify the wrong person. And that's true, but come on, let's be realistic. In this country and in the U.S., you don't just arrest people and put them in prison because you've got a photograph of them. You need evidence.
TL: It reminds me of an article I read about how society rails against new technology and how the same arguments are brought out time after time. Things like, “It's going to rot the brain,” “It's going to cause laziness,” “It's going to cause people to become mindless and lose their imaginations.” When you compare statements made over time, they all appeal to our basest fears and are based upon a lack of knowledge. Facial recognition could be a very positive thing. When it's used right, it’s a massive enabler of people. It can increase productivity. It can increase your leisure time versus the amount of time you work. It can increase your wellbeing. But you can only use it properly when it is managed, when people understand what it is and can actually mold society around it.
PE: I agree. It's up to our society to set clear boundaries that will enhance the positives and control the negatives of any new technology. We need the checks and balances, but it can't be the governments that decide what those checks and balances are.
TE: You’re proposing some kind of non-governmental, independent oversight organization, right? I don't see how that's practical, because it would rely on the government for the ability to enforce anything it does anyway.
PE: Well, it would rely on the legal system, and the legal system is, or should be, independent of government. The GDPR was not put together by government. It was put together by civil servants who work for government, and a lot of independent advice was taken in drafting it. Anyone who's read it closely will know that in general it is exceptionally good for the privacy of the individual. It does not give a lot of leeway for governments or other organizations to abuse the data they hold on individuals. And I think that's what we need. GDPR is a good starting point, but we need something focused on biometric data: legislation that says not just what can and can't be done with it but also who can and can't do it.
How to Avoid Instances of Abuse
TE: History has demonstrated that abuse is going to happen. What I'm thinking about here is the role that third-party providers play, because ownership of the data is key, and governments are not building and developing facial recognition technology themselves. Third-party companies are doing that, and their goal is to make a profit. So how do you manage that relationship and avoid instances of abuse when you have third parties involved in managing that data?
PE: That's really difficult. The best way to do it is to have a number of licensed vendors who are permitted to develop such solutions.
TE: Yeah. You'd legislate that, and you'd have to audit it. But ultimately, technology does get out, and the same would be true of facial recognition.
TL: Absolutely. There is a commercial interest in making the very most out of it, and if companies are empowered by governments to act on their behalf or have a license to operate, they will push those boundaries. The cat is out of the bag. It's going to happen. We just need to be prepared and educated enough to understand what is possible with this data and to make sure that the people we elect to look after this stuff are actually doing so in our interests and not just in the interests of corporations.
TE: So are we saying that now that this technology has been developed, there's effectively no way of stopping it from being used?
PE: Yeah. You can make it illegal, but that won't stop it from being used.
TE: Well, I don't know that we've come to a clear conclusion, but there's certainly a lot of interesting material to consider and plenty of interesting use cases out there. I want to thank you both, Thom and Paul, for joining us on this podcast. It was an interesting conversation, and I hope everybody who listened got something out of it as well. It's a topic we'll continue seeing in the news and continue talking about for a long time.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire, Inc.