Security researcher Troy Hunt recently discovered one of the largest online troves of leaked personal information in history — a collection of nearly 773 million hacked e-mails and passwords.
Hunt’s discovery stresses a point that’s been evident for some time: Once information is digitized, no one can fully guarantee its safety.
So how do we fix our cybersecurity troubles? In two words: Slow down. Put simply, the time has come to more purposefully control what it is we digitize. This means slowing down the pace of adoption of networked technology with new laws and standards aimed at increasing the quality and reliability of any device with an IP address. And it means carefully preserving analog capabilities, even as we embrace the digital.
Evidence of our inability to fully secure digital systems is all around. In November, for example, a data breach at Google exposed the personal information of 52 million users. Two months earlier, 50 million user accounts were compromised at Facebook. These two events bookended an October Government Accountability Office report, which asserted that nearly every U.S. military weapons system suffers from cybersecurity flaws. If organizations as advanced as Google, Facebook, and the U.S. military can't keep their systems safe, no one can.
As it stands, once something has been turned into computer code — by cameras, recording devices, keyboards, or sensors — nothing can be done to “certify” its status as secure. That information may be viewed or corrupted by unauthorized entities or used in ways that violate the privacy or trust of the individuals who generated the data.
We call the current, woeful state of our collective cybersecurity “flat light,” a term from the world of aviation. Among pilots, flat light signifies a near total lack of bearing where all directions look the same — a condition that now applies to organizations, regulators, and consumers alike within the digital world. Collectively, we simply do not know what to focus on, what precise action is called for, or how to protect all the data we generate, both as individuals and as companies.
The root of our cybersecurity problems is the unprecedented rate at which we’ve embraced networked devices. Buying a lightbulb? It may now be connected to the internet. A refrigerator? The same. A toilet? Soon enough. In some places, we can’t even make purchases without using network-connected credit cards or services like Apple Pay.
No technology in human history has been adopted as fast as the internet. It took over 100 years after the invention of the telephone in 1876 before its near universal adoption in the United States. Analogous adoption timeframes followed for electricity and automobiles. Yet it's taken only one decade for nearly four in every five Americans to own a smartphone. Cisco predicts a global jump from 17.1 billion connected devices in 2016 to 27.1 billion networked devices in 2021, a compound annual growth rate of roughly 10%.
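As a quick sanity check, the standard compound annual growth rate formula applied to the cited Cisco figures works out to roughly 10% per year (the device counts come from the article; the calculation itself is a generic CAGR sketch):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# Cisco's cited forecast: 17.1 billion devices in 2016 to 27.1 billion in 2021.
growth = cagr(17.1e9, 27.1e9, 2021 - 2016)
print(f"{growth:.1%}")  # roughly 9.6% per year
```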
The only way to reclaim control over this environment is to more meaningfully manage what it is we digitize — in other words, to carefully decrease the pace of adoption of networked technology. In an economy that rewards speed to market over all else, the security of our software is consistently deprioritized, if not simply ignored, as the adoption rates of new technologies increase. Slowing down this rate of adoption — by implementing new laws and standards and by ensuring that analog alternatives to select technologies are preserved — is the best way to insert a sustainable level of security into our increasingly complex cyber environment. Only then might we at least understand and gain some degree of control over our growing digital dependence.
The burden of regaining control of our cybersecurity environment falls on governments, companies, and individuals alike.
To start with, laws must mandate that any system with network connectivity either have a finite lifetime or accept updates. Currently, the list of devices that can’t be updated when flaws are discovered is long and growing longer. Among devices that used Google’s Android operating system in 2016, for example, a reported 29% could not be patched.
Faulty devices can be taken over by malicious actors and used for ill, as in the case of the 2016 “Mirai” botnet that disconnected huge swaths of the internet. The combination of immortal and unfixable is societally untenable; it should be illegal, too.
Even if laws are slow to enforce this mandate, companies and individuals would both be wise to let this guide their decisions: If the device they want can’t be updated and doesn’t die, it should have no place in their organization (or their home).
Second, liability for cybersecurity flaws must be made clear, and software makers whose code causes glitches must be held to account, just like producers of other consumer or industrial products. Today, most penalties for cybersecurity defects relate either to failures in reporting after breaches or to misrepresentations in a product's terms of service. Neither contributes to safer code.
Reporting mandates, for example, only penalize organizations that fail to disclose breaches once they’ve occurred. And relying on the terms of service between a software company and an end user ignores the fact that users have zero bargaining power when licensing software — they can only accept the terms offered or do without. This is why it’s so rare that any one organization is held liable when software proves insecure. Clarifying what constitutes minimum cybersecurity requirements, much like California did in a new law last year, and standardizing liability when these benchmarks are broken, will help remove shoddy software from the market, making us all safer in the process.
Last, governments and organizations alike must ensure that analog counterparts to digital capabilities are preserved even as we embrace new technologies. In almost all cases, software mechanisms have “common mode” failure paths between them, meaning if one service fails, the others do too. Existing analog services, on the other hand, in almost all cases, do not.
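The contrast can be sketched as a back-of-the-envelope probability calculation. The failure probabilities below are illustrative assumptions, not figures from any study: the point is only that services sharing a single dependency fail together, while an independent analog fallback multiplies the odds down.

```python
# Illustrative assumptions: each probability is a made-up 1% failure chance.
p_shared = 0.01  # assumed chance the shared network/cloud dependency fails
p_analog = 0.01  # assumed chance an independent analog process fails

# Two digital services with a "common mode" dependency: one outage takes
# down both, so the chance of losing both equals the shared probability.
p_both_digital_down = p_shared  # 1%

# A digital service backed by an independent analog alternative is lost
# only if both fail, and independence lets the probabilities multiply.
p_digital_and_analog_down = p_shared * p_analog  # 0.01%

print(f"{p_both_digital_down:.2%} vs {p_digital_and_analog_down:.2%}")
```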
Instead of focusing solely on security, we must expand our efforts to ensure that critical activities have alternative methods for operating should a cybersecurity flaw remove them from service. Similarly, information security teams within organizations should focus not simply on protecting their organization from the attack of the day; they should also ensure that alternate, analog means for operating exist when a cyber catastrophe arises.
The widespread use of networked devices has created enormous benefits. We must not turn back the clock on these advancements. Nor can we.
But when we embrace new technologies too quickly, as we have with internet-connected devices, we can all too easily overlook the tradeoffs we’ve made. Over the last decade, we’ve collectively chosen connectivity and convenience over security and privacy. That tradeoff need not be permanent. The choice is still ours to make.