How an Australian Surveillance Bill Will Affect Encryption Worldwide

2018-11-16 - 7 minute read

Authored by Michael Specter

This week, the Australian parliament will hear testimony on a bill that, if enacted, would regulate the design and operation of encryption systems to meet law enforcement surveillance needs; my group's own Danny Weitzner will be testifying. The bill gives the government broad powers to demand that software vendors and service providers, such as Google and Apple, supply methods of access to encrypted data through secret orders called Technical Capability Notices, or TCNs. As we'll explain, while the bill improves on similar pieces of legislation, its anti-transparency provisions create very real risks to vital new security techniques deployed on the Internet all around the world.

A major downside of the bill is that it imposes onerous secrecy provisions, with severe punishments (such as five years' imprisonment under section 317ZF of the latest text) for anyone who discloses information about the surveillance requirements, the TCNs. We've analyzed the technical implications of this part of the proposal and, together with a number of leading computer security researchers, have written a letter to the Australian Parliament explaining the risks of the secrecy penalties. In a nutshell, the bill's secrecy provisions could make a number of state-of-the-art security features deployed on the Internet illegal. As you'll see, this stance will inherently make us less secure.

[Image: locks. The Australian Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 may decrease Internet security. Image by Christoph Scholz, licensed under CC BY 2.0.]

Although this is yet another attempt to regulate encryption (something many have considered a fraught proposition), there are good parts to the bill, which reflect a more mature understanding of the technical risks of regulating cryptographic systems. The proposed law requires that any TCN be “practicable” and “technically feasible,” and that the request show the data could not be collected by any other means. Even better, the text clearly states that a service provider “…must not be required to implement or build a systemic weakness or systemic vulnerability” in response to a TCN. This acknowledgement is a huge step forward in the encryption debate and puts the regulation on much better footing than it would have been otherwise.

It is also important to acknowledge that the law enforcement and national security community has a very legitimate need for secrecy. Keeping sources and methods of access secret is a core asset of intelligence work: letting the bad guys know when and how you are spying on them has obvious downsides.

However, while we understand the impulse to keep TCNs secret, there’s a real cost to doing so.

Considering Transparency Mechanisms

The problem is that the secrecy requirement is inherently incompatible with transparency mechanisms: tools cryptographers use to let users audit those responsible for managing cryptographic keys. When the critical keys users depend on are stolen, impersonated, or otherwise misused, there is little a user can do to detect it. Transparency mechanisms help solve this problem by making it impossible to hide the abuse of a key, reducing the trust users must place in key holders.

Transparency mechanisms generally work by maintaining a cryptographically signed public log of sensitive actions. The log is copied by independent third parties, and when a user's device sees one of these sensitive actions, it checks that the action is legitimate by confirming that some number of those third-party copies contain it.
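To make this concrete, here is a minimal sketch of such a log, written in Python using the third-party `cryptography` package; the class and field names are our own illustration, not any deployed system:

```python
# A minimal sketch of a signed, append-only log. Assumes the third-party
# `cryptography` package (pip install cryptography); all names are illustrative.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class TransparencyLog:
    """An append-only log whose entries are signed by the operator.

    Each entry's hash commits to the previous one, so the operator cannot
    silently rewrite history without invalidating every later signature.
    """

    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()  # shared with mirrors
        self.entries = []                         # (payload, head, signature)
        self._head = b"\x00" * 32                 # hash of the latest entry

    def append(self, record: dict) -> None:
        payload = json.dumps(record, sort_keys=True).encode()
        # The new head commits to both the old head and the new record.
        self._head = hashlib.sha256(self._head + payload).digest()
        self.entries.append((payload, self._head, self._key.sign(self._head)))
```

Mirrors periodically fetch the entries, verify the signatures against the operator's public key, and keep their own copies; a device can then consult those copies rather than trusting the operator alone.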

Software updates are a good example of this technique. Phone manufacturers supply updates that your device cryptographically verifies before installation. If an attacker were to break into Apple or Google and steal the signing keys used to produce those updates' signatures, the attacker could sign a malicious update, which your phone would happily install. The attacker could then control your phone remotely and access any information or services you can reach from it; in other words, almost anything. The problem is that there is no way your phone can know that it is the only one that has seen the update, and there is no way for Apple or Google to know that their keys have been misused.
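Today's check stops at the signature. A sketch of what the phone does (again Python with the `cryptography` package; `verify_update` and its parameters are hypothetical):

```python
# Sketch of vendor-signed update verification; names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_update(update_bytes: bytes, signature: bytes,
                  vendor_key: Ed25519PublicKey) -> bool:
    """Return True iff the update carries a valid signature from vendor_key.

    The limitation is exactly the one described above: this proves only
    which key signed the update. A malicious update signed with a stolen
    vendor key verifies just as cleanly as a legitimate one.
    """
    try:
        vendor_key.verify(signature, update_bytes)
        return True
    except InvalidSignature:
        return False
```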

With a transparency mechanism, service providers like Apple or Google would publish a cryptographically verifiable public log of updates. Before blindly accepting an update, your phone could check a copy of the log held by, say, the EFF, mutually distrustful governments, or any number of other institutions. If the update does not appear in the log, your phone could alert you, refuse to install the software, and report the failure to the manufacturer.
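Assuming each mirror exposes the set of update hashes it has recorded, and treating the quorum size as a policy choice, the client-side check might look like this sketch:

```python
# Sketch of a client-side quorum check before installing an update.
# Each mirror is modeled as the set of SHA-256 hashes of updates it has
# logged; the mirrors and the quorum threshold are illustrative.
import hashlib


def safe_to_install(update_bytes: bytes, mirrors: list[set[bytes]],
                    quorum: int = 2) -> bool:
    """Install only if enough independent mirrors have seen this update.

    A "secret" update, even one bearing a valid vendor signature, appears
    in no mirror, so the check fails and the device can refuse it, warn
    the user, and report the failure to the manufacturer.
    """
    digest = hashlib.sha256(update_bytes).digest()
    return sum(1 for mirror in mirrors if digest in mirror) >= quorum
```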

There is no universe in which a “secret” update can exist and a transparency mechanism remain useful; the two are mutually exclusive. Should a TCN require a secret update, either the transparency mechanism's guarantees must be broken or the TCN must be publicly divulged.

Updates are only one example of using transparency to aid authentication. Others include Certificate Transparency, which publicly logs the certificates behind HTTPS, and analogous logs for the keys used in encrypted group messaging.

The potential for harm has been realized many times. In my master's thesis, I chronicled eleven instances over the course of ten years in which the arbiters of web authentication, called certificate authorities, broke the trust model of the web. Stuxnet, an incredibly sophisticated piece of malware, famously leveraged a code-signing key stolen from a hardware manufacturer to bypass Windows' code authentication mechanism.

We’re confident that transparency mechanisms will be necessary components of future security mechanisms. That’s why I, along with other security researchers, cryptographers, and academics, have signed a joint letter to the Australian government expressing our concerns.