As part of this two-part series, let’s now look at another exhibit demonstrating how people act as the first, last, and best data and privacy defense.
Exhibit B: Potentially Unwanted Leaks
If you have some technical literacy, you may have heard of potentially unwanted programs (“PUPs”). It’s all that glop and gloop – malware, viruses, trojans, worms – that can wreak havoc on your devices and systems. More often than not, PUPs end up on your system because of – you guessed it – the operator doing something silly: clicking a link they shouldn’t have, opening an attachment from an unknown sender, downloading and installing a sketchy app. You name it. Here’s our advice for this type of basic operator fail: make sure your OS is patched and updated, make sure your antivirus signatures are up to date, make sure your firewalls aren’t a sieve, and if you are one of the lucky ones with some AI at your disposal, make sure it detects and, where possible, stops nasty traffic. If you are an everyday user, that’s about all you can do after you’ve done something you shouldn’t have.

That’s about all we have to say on PUPs here, except for this: it is the phrase “potentially unwanted” that catches our attention, because it captures the idea of non-linear costs. If something can be “potentially unwanted,” it also carries the potential to cause you problems – if not immediately, then at some point in the future. In other words, costs that can be both unpredictable and incalculable. That’s why we identified something else that is potentially unwanted: leaks of information.

To clarify, we are not talking about the breaches you hear about in the news. We are talking about those tidbits of information – those breadcrumbs – that you and others leave behind: comments on web posts about what you were doing that day, pictures of you on social media getting a little rambunctious, or partial information that websites leave open to the public, such as part of a Social Security Number or your user preferences.

Why do the crumbs matter? Because when pieced together, they give an attacker a formidable weapon to use against you, particularly in a social engineering attack (phishing, spear-phishing, and pretexting being very common tactics). The “collecting the crumbs” approach is extremely powerful because it’s not only the good guys who have AI; the bad guys have it too. AI can be used to scour, collect, and collate those crumbs into a picture of you that can be shockingly accurate – and, of course, used against you.

And these crumbs are normally made available, in one way or another, for one reason: Homo sapiens. Go back to the “convenience” conversation. Sure, some argue there is an upside to being able to carry a camera in our pocket and post to social media with a touch of the screen, but we have also never understood the cost of that capability. And why haven’t we? Because that picture, post, or tidbit of information about where we went last night has no set or agreed “value” at any given time or between parties. Don’t believe us? How much do you value your family photos? Chances are, more than the two of us would. But if we all had similar standards of living and relatively equal financial positions, we’d be pretty consistent about what $100,000 is worth to each of us. That is why the crumbs you leave behind – and the ones others leave behind about you – may have no intrinsic value individually, but pieced together they take on a very different value. In the moment, that collated picture of you could be used in a social engineering attack to gain access to your employer’s network. But over time, the value of that collated picture could change.
Instead of being used for a social engineering attack that simply accesses your employer’s network, it is used for blackmail, manipulating you into more specific action. That’s the problem with these potentially unwanted leaks. There is no metric to determine their value. There is no market index to compare against. No “information” gold standard. This is what makes the privacy discussion so difficult: privacy, as many of us know it, is regrettably dead, in large part due to operator habits.
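To make the “crumbs” idea concrete, here is a minimal, purely illustrative Python sketch. Every source and data point in it is hypothetical; the point is only to show how fragments that are individually worthless combine into a ready-made pretext for a spear-phishing call.

```python
# Hypothetical public "crumbs" scattered across the web, each harmless alone.
crumbs = [
    {"source": "blog comment",   "fact": ("employer", "Acme Corp")},
    {"source": "tagged photo",   "fact": ("hobby", "marathon")},
    {"source": "exposed record", "fact": ("ssn_last4", "1234")},
]

# Pieced together, the fragments form a single profile of the target.
profile = dict(crumb["fact"] for crumb in crumbs)

# The aggregate becomes a plausible pretext an attacker can deliver verbatim.
print(f"Hi, this is HR at {profile['employer']}. Congrats on the "
      f"{profile['hobby']}! To update your benefits, please confirm "
      f"the SSN ending in {profile['ssn_last4']}.")
```

No single entry above would raise an eyebrow on its own; the assembled profile is what an attacker – increasingly with AI doing the scouring and collating – is really after.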
Because You Cannot Determine Cost, It Becomes All About You
No need to give a detailed account of what you have heard in the news over the last few months regarding social media and privacy. But a brief summary: the industry moved so fast, altering our habits just as fast, that we never had the opportunity for a truly honest discussion about how information gets gathered, used, and retained, and how those processes could alter our behavior. That is why, until proven otherwise, you should consider accepting the following as truths:
- 15 pages (give or take a few) of terms and conditions agreements don’t count as “clear.”
- Free is never free. There is a cost somewhere. You just haven’t identified and weighed it.
- Your mobile devices are information ticking time bombs because there’s a good chance you have no idea what information your devices and apps are scooping up.
- Not only do you have to worry about your own data protection habits, you also need to worry about the person you are communicating with. Remember, the receiver has a copy of what you communicated, and their information protection habits may not be as good as yours. It’s a pretty old tactic: if you can’t get at a target directly, you go at them indirectly.
- If you expect tech to do all the heavy lifting for you (which still will not guarantee protection), you should also accept that you forgo any claim to privacy. That’s not a statement about legality or universal rights. That’s a statement of practicality.
Let’s quickly expand on some of these truths. This two-part series is about 3,500 words. There is a lot to consume here. But saved in plain text format, this essay is only about 20 kilobytes. (Funny tidbit: when we encrypt the plain text file, the size drops to about eight kilobytes – likely because encryption tools typically compress the data before encrypting it; a short sketch after the list below shows the effect.) In other words, it is nothing more than a speck of dust in the data world today. Okay, maybe closer to the size of an atom if the world is pumping out 50,000 gigabytes per second. (For those having difficulty with the conversion, that’s 50,000,000,000 kilobytes per second.)

But what if these 3,500 words were not an essay on cybersecurity? What if they were key corporate documents? How about intellectual property? Perhaps mission orders? Your crown jewels? Remember, formulas of just a few ingredients – like those of fast foods or beverages – are sometimes guarded like the nuclear launch codes, limited to just a few people. In some cases, formulas like these are still written on paper and passed on verbally – a classic and prudent example of knowing when not to use technology. Information need not be plentiful to be valuable, a nuance that sometimes gets lost in the conversation about “big data.”

Therefore, short of reading all traffic on the network (a privacy issue), even the best AI will not be able to stop those 20 kilobytes of plaintext from getting out into the wild – but you can, based on how you interact with technology. That’s why we – Homo sapiens – have a responsibility to keep information secure by going back to using technology as a tool and not a crutch. Our only other practical alternatives are:
- Normalize theft as an expected part of business and somehow try to incorporate that cost into our business models (we can’t, because it may be a non-linear, meaning non-calculable, cost); or
- Give up all expectations of privacy (gasp) and let AI and algorithms act as gatekeepers, literally analyzing every last kilobyte of data. (Kind of smashes the efficiency discussion, too.)
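As promised above, here is the file-size tidbit in sketch form. It is a minimal illustration under stated assumptions, not a claim about any particular product: the filename is hypothetical, and zlib stands in for the compression pass that encryption tools such as GnuPG typically run before encrypting (the encryption itself adds only modest overhead).

```python
import zlib

# Read this essay as plain text (hypothetical filename).
with open("essay.txt", "rb") as f:
    plaintext = f.read()

# Compress the way an encrypting tool typically would before encryption.
compressed = zlib.compress(plaintext, level=9)

print(f"plaintext:  {len(plaintext):,} bytes")   # roughly 20 KB for this essay
print(f"compressed: {len(compressed):,} bytes")  # English prose often shrinks by half or more
```

Now, back to those two alternatives.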
Neither option is particularly appealing to us. And apart from demonstrating that the human factor complicates the problem – a fact that absolutely must be dealt with – the other idea to take away from this piece is that costs may not be calculable. That should scare you, and here is why: information losses may not always be static, meaning that when x is lost, y is the expected result. Given the intangible value of information, the loss has the possibility of being dynamic: when x is lost, y may happen, which could trigger a, b, and c; but z may also happen and influence the result, triggering d, e, and f – all of which are out of our control. This is why we close by saying that you – the operator – need to make judgment calls. Err on the side of caution. This is a behavioral change that will not come easily. In some cases, you may learn your lesson by burning yourself. In others, something more punitive may be required. Most of us would not keep our jobs if we were responsible for locking up shop but every once in a while forgot to do so. We understand this is tough, because it requires thinking in multiple steps and over the long term, neither of which Homo sapiens are particularly good at. But the alternatives mentioned above – normalizing theft and giving up control to the machines – are not good ones. It’s up to us. We – the people – will always be the first, last, and best defenders of information.

About the Authors:

Paul Ferrillo
is counsel in Weil’s Litigation Department, where he focuses on complex securities and business litigation, and internal investigations. He also is part of Weil’s Cybersecurity, Data Privacy & Information Management practice, where he focuses primarily on cybersecurity corporate governance issues, and assists clients with governance, disclosure, and regulatory matters relating to their cybersecurity postures and the regulatory requirements which govern them.
George Platsis has worked in the United States, Canada, Asia, and Europe as a consultant and an educator and is a current member of the SDI Cyber Team (www.sdicyber.com). For over 15 years, he has worked with the private, public, and non-profit sectors to address their strategic, operational, and training needs in the fields of business development, risk/crisis management, and cultural relations. His current professional efforts focus on human factor vulnerabilities related to cybersecurity, information security, and data security by separating the network and information risk areas.

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.