The Endless Conundrum of creating a secure PinePhone

Publish date: 2021-06-24

A few days ago, a friendly face joined the Pine64 Development chat room, where developers meet to talk about creating software for many of Pine64’s more complex consumer products (the PinePhone, PineTab, PineBook, and similar). They had a question that sparked days of debate and research:

Is it possible to perform Verified Boot on a PinePhone?

The question seemed simple enough. Verified Boot – sometimes called or trademarked Secure Boot – verifies that the owner of a system approved a piece of software to run, or boot, on that system. This is usually achieved by checking a set of (practically, for now) unforgeable signatures provided with the software. The theory goes that the device owner had to create the unforgeable signature, and the device owner would not create that signature if they did not approve of the system running the signed software.
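
To make that concrete, here is the primitive at the heart of every Verified Boot scheme, sketched with openssl (the file names are placeholders):

```sh
# Sign a boot image with the owner's private key...
openssl dgst -sha256 -sign owner-private.pem -out boot.img.sig boot.img

# ...and verify the signature with the public key before running the image.
# A verification failure means the key holder never approved this image.
openssl dgst -sha256 -verify owner-public.pem -signature boot.img.sig boot.img
```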

The implications of such a system are massive: if I know that the software running on my device was approved by me, I can be reasonably sure that I haven’t accidentally installed malware on it that can hide from me. By extension, I can be reasonably sure that no one else knowingly installed malware on my device without me noticing. They might have done this (for example) over the internet, with a Bluetooth vulnerability, or even by physically taking my phone for a short time. But I’m pretty sure that, every time I boot my phone, the software is in an approved state up until my verification ends.

The answer to the original question seemed simple: yes, the Allwinner A64 System on Chip which powers the PinePhone supports verified boot. It says so right in the discussion of ROTPK_HASH on the linux-sunxi wiki. All you need to do is create an asymmetric 2048-bit RSA keypair, take a hash of the public key of the pair, place the hash into the write-once storage on the A64, and boom! Now the A64 will only boot software which carries a signature corresponding to your private key. From there, you can use a series of open source bootloaders and firmware to ensure that only the software you approve runs on the PinePhone.
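
As a rough sketch of that flow (the exact byte layout the A64’s boot ROM expects for the ROTPK hash is documented on the linux-sunxi wiki; this only illustrates the steps):

```sh
openssl genrsa -out rotpk.pem 2048                 # create the 2048-bit RSA keypair
openssl rsa -in rotpk.pem -pubout -out rotpk.pub   # extract the public half
sha256sum rotpk.pub                                # a digest of the public key; a form of this
                                                   # hash is what gets burned into the eFuses
```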

That answer was unsatisfying for primarily social reasons. Verified Boot removes the device purchaser’s control of the device, making them a tenant rather than an owner.

Verified Boot has risks that PinePhone buyers may not be comfortable with.

The A64’s Verified Boot scheme isn’t much different from any other ARM64 SoC: a key placed into write-once storage is used to verify software on writable storage. This is actually a standard on ARMv8-A platforms, called Trusted Board Boot Requirements (TBBR).

The main problem is not with the implementation; it lies in the design itself. TBBR was written with the assumption that a person who purchases a device may not be the owner of that device. Instead, the device’s owner is whoever holds the key written into the write-once storage. In almost every case, that will be the device’s manufacturer. This model turns the purchaser of the device into a tenant rather than an owner: the owner is allowing the tenant to use some of the computing resources on the device.

Make no mistake: This model has been excellent for the security of everyday users who are in situations that respect their human rights. It is, on balance, a good thing that every time the FBI obtains a suspected terrorist’s iPhone, they get into a months-long public opinion battle about whether they should have the tools to unlock its encrypted storage to get to the potential evidence it contains. If stealing your naughty texts is that hard for the FBI, it’s really, really hard for the guy who steals your iPhone in search of a quick buck. Apple being the owner of your iPhone and allowing you access to its computing resources means that you, a fairly average person doing fairly average things in a fairly well-off country, see tangible benefits.

However, things are less sunny for users of Apple devices in countries whose governments don’t respect their citizens. Apple has shown its willingness to bend to the will of China’s government in ways that potentially reduce the privacy or security of users in that region, as Wired reported in 2019. To my knowledge, this has not extended to breaking or backdooring encryption on Apple products sold in China. However, trusting a manufacturer with ownership of your device today means trusting them with ownership of your device in several years, too. If there is a negative regime change in your government and Apple decides that the risks of refusing the new regime’s requests outweigh the benefits of protecting you, you are not getting that protection any more.

My point is not “government bad”; it’s that even though Achilles seems immortal, his heel is vulnerable. Even the largest giants in the world have their breaking points. For Apple, it’s losing the revenue of the Chinese market. For your favorite open source project or boutique phone manufacturer, it’s a lot less. You need to consider that fact when you choose to purchase devices owned and controlled by a small outfit. I must reiterate that I only speak in such grandiose terms to take the argument of device ownership to its logical conclusion. However unlikely the extreme governmental circumstance may be, the fears that lead to its articulation have roots in routine abuses of power by device manufacturers.

Pine64’s customers are well aware of this trade-off of ownership for security. Many chose to purchase the PinePhone because it does not perform Verified Boot, not because it has the capability to. You normally won’t hear this articulated as “I fear my government could control Pine64,” but in more practical terms like, “I want to try out multiple operating systems without hassling with bootloader locking,” “I don’t want the manufacturer to decide when my device should be thrown in the garbage,” or, “my last phone got an update from Samsung and started showing me ads that I couldn’t remove.”

As an aside, there is a fun story developing as I write this. According to the Windows 11 compatibility checker, Windows 11 may require Secure Boot to be enabled before it will install at all. This could extend to Windows 11 refusing to boot if Secure Boot is disabled. Microsoft no longer requires PC manufacturers to allow you to turn off Secure Boot. Many Linux distributions do not have Microsoft-signed bootloaders, so their installers will not be able to boot on new Windows 11 PCs. Users can enroll their own signing keys in Secure Boot, but the technical knowledge required to do so is much higher than what is required to boot a USB stick. Ubuntu, Fedora, Red Hat, and openSUSE will be fine, but this will certainly be a loss for smaller distros. This is exactly the kind of concern that people have when we start to talk about enabling Verified Boot.

Becoming even more practical, the TBBR standard has some miscellaneous technical problems. Since the key that secures the boot process is stored in write-once storage, it is impossible to replace the key without replacing the device. This means that if your device’s manufacturer were to leak the key burned into a product, an attacker with the key would potentially have full control of the product (and you still wouldn’t!). If the manufacturer instead lost the key in a freak `rm -rf` accident, they would be permanently unable to update any of their products. And finally, it is not possible to rotate the key every few months to diminish the risk of the previous two circumstances. At best, this is risky and wasteful for manufacturer and customer alike. Once the key is out, the manufacturer may choose to simply recommend that users buy a new device, or they’ll have to issue a recall. Both are PR failures; the latter is more expensive for the manufacturer. These limitations alone make TBBR a less-than-perfect solution for smaller manufacturers.

There are optional specifications in TBBR that allow storing the key in writable storage, patching bugs in the boot code, and other measures that would help reduce the manufacturer’s risk and give the device’s purchaser more ownership of the device. However, these optional specifications are more expensive to implement and certify, and the additional complexity of such an implementation adds more attack surface for unpatchable security bugs. Allwinner didn’t think that expense was worth it for the A64, and it doesn’t appear that any SoC designers for a potential future PinePhone did either.

So, due to limitations caused by design mixing with customer opinion, there does not appear to be a purely technical, no-hardware-modification way to protect your PinePhone from early-boot malware. However, that does not mean that all is lost.

Targeting mitigations to risks

At this point in the Pine64 dev chat, we took a step back. Now that we had discovered the constraints on securing the PinePhone (we don’t want to give up total control of the device or open ourselves up to mishaps by enabling Verified Boot), we could start to explore the unconstrained bounds within. People went to work reading scientific papers, ARM specification documents, and Matthew Garrett’s blog so we could better define the threats we wished to defend against. Once we had a better idea of our threat model, we could better target solutions to the problems we came up with.

Spoiler alert: most of the threats we wish to protect against do not involve early-boot malware at all.

Threat model: common phone thievery

Imagine your phone. You’ve stored a number of family pictures, text messages, and other miscellaneous data on it.

You have attracted the attention of a casual adversary performing opportunistic attacks on people at a public gathering space. They do not have any specialized knowledge about computer systems or their security vulnerabilities. They may be a pickpocket taking advantage of today’s giant phones and small pockets. They may be watching for purses, bags, or devices left unattended. But however it happens, they come to have physical access to your phone. In other words, they stole it.

Protecting a device from an attacker with physical access is particularly difficult. For years, the cybersecurity community has agreed that a computer in the physical possession of an attacker is now owned by that attacker. Mitigation efforts for this problem mostly involve protecting the data on the computer from the attacker by putting it behind walls that would take years to break down or destroying it as quickly as possible. The attacker still owns the computer, but they don’t own the data on that computer. This is generally the best-case scenario we can hope for with the PinePhone as well.

The attacker in question isn’t terribly interested in who you are. They don’t care about a piece of proprietary information that only you have. They just care about making a quick buck. So they slap your phone onto their laptop and start rifling through its data. If they can learn enough about you, they might contact you and offer to return your phone in exchange for a ransom payment. Hopefully you care enough about the data on the device that you’ll pay up. If you are particularly well-off and have some particularly sensitive (read: embarrassing) content available to the attacker, they may choose to up the ante with blackmail. If all else fails, they’ll just sell the phone to a pawn shop.

This is a common situation for users to find themselves in, and it has clear benefits for the attacker. According to a report written by Lookout and hosted by the FCC, 3.1 million Americans lost their phones to theft in 2013. The report found that 50% of phone users would pay $500 to get the data back from their lost phone. About a third would pay $1000 for the same privilege. With today’s devices costing more than that when bought new, it’s easy to see that there’s a lot of money to be made as a casual attacker. In the intervening eight years, this threat model has largely been mitigated by Android and iOS.

The first mitigating feature is a backup service, like Google Drive and iCloud backups. These keep a second copy of your data, so losing your phone does not mean you lose access to the data it stores. The attacker can no longer use the withholding of your data as a bargaining chip.

The next mitigation is device encryption. Device encryption is turned on by default in iOS on all devices and in Android on many devices. These systems use a set of keys (hopefully) known only to the device to boot to the lock screen and run basic apps and services, like screen readers and alarm clocks. Another set of keys, encrypted with your passcode, is used to encrypt the remaining data on the device. This means that your attacker will only be able to access a small subset of your data, removing their ability to blackmail you. If the attacker tries to brute force your PIN, they’ll be met with a dreaded lockout screen and will have to wait to try again. If they try too many times, the device may just wipe itself.

Finally, remote killswitches like Google Device Protection and Find My iPhone allow you to remotely locate a device, turn it into an expensive brick, or wipe all of your data from it. Depending on what you choose to do, remote killswitches can remove the attacker’s ability to use any data or sell the device for any worthwhile profit.

Altogether, these options deter would-be phone thieves from stealing a device. They also mitigate the threat of lost or misused data causing harm to you or the organization that owns your device.

Today’s mobile Linux operating systems do not fare well at protecting the user’s data from such an attacker. While security through obscurity may stop the most casual thief (they can’t mount your F2FS SD card on Windows!), relying on that fact is not safe. Some OSes include support for full disk encryption with LUKS, which will probably deter enough attackers that they’ll just factory reset your phone and move on. But even a casual attacker could know enough to plug in a USB keyboard emulator that types in PINs for days and waits for the magic to happen. I’ve only seen brute-force prevention working in Ubuntu Touch, and it does not have any encryption support. Without brute-force prevention, your device could be unlocked in minutes or hours. Without encryption, an attacker could pop your SD card into their computer and read all your secrets. The two need to be combined to make things safer from our simple attacker.
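
As a sketch of the encryption half of that combination, basic LUKS setup with cryptsetup might look like this (the device and mapping names are assumptions about the PinePhone’s partition layout):

```sh
cryptsetup luksFormat /dev/mmcblk2p2       # encrypt the root partition (destroys existing data!)
cryptsetup open /dev/mmcblk2p2 cryptroot   # unlock it at boot, prompting for the passcode
mkfs.ext4 /dev/mapper/cryptroot            # the filesystem lives on the mapped device
```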

Obviously, protecting against these unskilled attackers with physical access is the best first step on our journey to a secure PinePhone. We’ll need to develop brute-force prevention strategies and start by enabling basic full-disk encryption (even if it’s only protected by the user’s short PIN). But let’s assume that we’ve successfully mitigated the threat of these attackers. Let’s move on and consider what we’d do to protect the device against some more skilled attackers.

Up the ante: spicy phone thievery

Now we move past the simple attacker and on to someone a little more advanced. Suppose that you work with sensitive customer data for a business, like legal names, addresses, and credit card numbers.

Suppose that your attacker is now a small organization or a single skillful person who has physical access to your PinePhone with known sensitive data on it. The attacker knows that the data on the phone is far more valuable than the phone itself. Maybe because they saw you in a fancy suit when you accidentally left your phone behind.

This is a case where securing the PinePhone becomes much more difficult. Our best-case scenario is still forcing the attacker to give up, wipe the device, and sell it to a pawn shop. However, this attacker will be a little more careful with the device: they won’t go charging in and trying a bunch of PINs to see if it wipes itself. They might do a little research on the PinePhone and notice that they can boot any operating system with just an SD card. They might download one and boot it on the device, much like someone who steals a laptop might boot up an Ubuntu live USB for a little digging. They are no longer restricted by your brute-force prevention or automatic wipe.

It’s probably no surprise that encrypting all user data is still a good start on solving this problem. LUKS with a sufficiently long passcode, a strong key stretching algorithm, and a strong cipher will probably remain unbreakable by our reasonably skilled attacker for many years. However, we’re talking about a smartphone with a potato for a CPU. Since the smartphone has a touchscreen, your passcode is probably quite short and consists only of numbers for your convenience. The key stretching algorithm causes a delay between entering a password and it being accepted, so it will need to be set to a very low number of iterations on the PinePhone; otherwise, it will take an unbearably long time to unlock your encrypted storage. Together, these problems mean that the attacker can go to town on brute forcing the encryption password for the disk.
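
You can see this trade-off directly in cryptsetup, which lets you tune how long key stretching takes on the device that unlocks the disk (the numbers here are illustrative, not recommendations):

```sh
cryptsetup benchmark   # measure what the A64's CPU can actually do
# Spend roughly two seconds of A64 time stretching the passcode. Lower values
# unlock faster, but make each brute-force guess proportionally cheaper too.
cryptsetup luksFormat --pbkdf argon2id --iter-time 2000 /dev/mmcblk2p2
```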

I think that a very good start to protecting your phone from this attacker is to create two passcodes: one longer passcode that you must enter on a reboot, and a shorter passcode that you enter to unlock the device day-to-day. The longer passcode must be used to encrypt your data; the shorter passcode does not necessarily need to be. Then, we beef up our brute-force prevention a bit: instead of locking you out of your device when you enter your short passcode too many times, we reboot the device. Now the attacker is really in trouble: they need to figure out your much longer boot passcode to unlock the device.
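
A minimal sketch of that reboot-on-failure policy, assuming a hypothetical check_pin helper provided by the lock screen (no current mobile Linux OS ships such an API):

```sh
#!/bin/sh
# Hypothetical lock screen loop: after too many bad short passcodes, reboot.
# Rebooting drops the disk keys from RAM, so the long boot passcode is
# required to get back in.
MAX_TRIES=5
tries=0
until check_pin; do                # check_pin is an assumed helper, not a real tool
    tries=$((tries + 1))
    if [ "$tries" -ge "$MAX_TRIES" ]; then
        systemctl reboot
    fi
done
```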

For convenience’s sake, let’s assume that a cold boot attack is not possible here (since I know someone will say it). Let’s say that we implemented a way to put the decryption key at a certain point in the A64’s SRAM that is used by its boot code so it will always be overwritten on boot no matter what. If the attacker reboots the device themselves, the key is erased from memory. The reboot caused by brute force prevention also erases the keys, then. These are huge assumptions, I know. But I think that getting hung up on cold boot attacks right away oversells the capabilities of a lot of attackers. Poof, the decryption key is gone from memory. Anyway…

There are some options which allow you to continue to have poor password policy. Before secure elements were common, many ARM platforms used ARM TrustZone and data fused within the SoC. The TrustZone OS could use the data that only it knows about to generate an encryption key, which it only gives access to when provided with an appropriate PIN. The non-TrustZone OS would not be able to access the key without the PIN, so the encryption key was effectively hidden. This would mean that the data can, in theory, only be decrypted on the device that encrypted it. However, this cannot be done securely on the PinePhone. While your PinePhone operating system could load a TrustZone kernel and perform such a key derivation algorithm, there is nothing stopping the attacker from loading their own TrustZone kernel and doing the same. It would raise the bar for a successful attack, but at the cost of a lot of engineering to build the TrustZone operating system and applications to run on it. It is better to assume that the PinePhone’s SoC can’t reliably hide data from itself without Verified Boot being enabled. So, we need to be a little more creative.

At the base level, all the TrustZone implementation is doing is separating an integral part of the encryption key from the storage containing the data it encrypts. We can do that separation, too. By default, LUKS stores the information needed to decrypt your disk (its header and key slots) on the same disk it decrypts (except your passcode, which you provide at boot). If an adversary can read this information, their job of cracking your encryption passcode becomes much easier. If we moved the decryption information, even just a small piece of it, onto a different storage medium, we would make it essentially impossible to crack your passcode. To achieve this, we can use LUKS with a detached header.

A detached LUKS header stores the information needed to decrypt the disk on a different storage medium. We only need to decide where to put it. The PinePhone’s eMMC or SD card is a poor location: these storage devices would generally be lost with the PinePhone, so the attacker would have access to them. A USB stick that you carry with you would be an excellent storage location, though. You could store the USB stick on a keychain or another object that you’re unlikely to lose along with the PinePhone. When you need to decrypt your phone at boot, you plug in your USB stick to provide the decryption information, then provide your passcode to complete the decryption. If the attacker does not have the detached header, they will probably not be able to access your encrypted data. Even if they knew your PIN, the data would be locked away.
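
With cryptsetup, a detached LUKS2 header on a USB stick might look like this (the paths and device names are assumptions):

```sh
truncate -s 16M /mnt/usb/pinephone-header.img   # pre-allocate space for the header
cryptsetup luksFormat --header /mnt/usb/pinephone-header.img /dev/mmcblk2p2
# At boot, with the USB stick present, unlock using the detached header:
cryptsetup open --header /mnt/usb/pinephone-header.img /dev/mmcblk2p2 cryptroot
```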

If losing the USB stick with the PinePhone poses too much of a risk, you could store the majority of the encryption key for your PinePhone on a smartcard, such as a YubiKey or Nitrokey. These devices run (hopefully) very well-secured operating systems that are unlikely to act outside of their defined boundaries. To use this, LUKS would encrypt some of the information needed to decrypt your disk, then store that encrypted information on the disk. On boot, it would ask your smartcard to decrypt the encryption key stored on the disk. Your smartcard would request a PIN from LUKS to complete this request. Your device shows you a PIN entry prompt, and if you pass the challenge, your device gets the decrypted encryption information and can decrypt the disk. You can trust your smartcard to wipe itself after a few incorrect attempts, whereas it is impossible to trust the PinePhone to enforce the same boundaries. Now if you lose the PinePhone and the smartcard to the same attacker, you can still be reasonably sure that your data is safe. However, there is a small price paid in control over the solution: your smartcard probably has firmware on it that you can’t change by yourself. It seems like the community has largely moved past smartcard firmware control being a problem, though.
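
One way to sketch this with an OpenPGP smartcard and standard tools (the key ID and paths are placeholders, and a real implementation would live in an initramfs keyscript):

```sh
dd if=/dev/urandom of=keyfile bs=64 count=1    # generate a random LUKS key
cryptsetup luksAddKey /dev/mmcblk2p2 keyfile   # enroll it as an additional keyslot
gpg --encrypt --recipient 0xDEADBEEF --output /boot/keyfile.gpg keyfile   # wrap it with the card's key
shred -u keyfile                               # only the wrapped copy remains on disk
# At boot, the card prompts for its PIN and enforces its own retry counter:
gpg --decrypt /boot/keyfile.gpg | cryptsetup open --key-file=- /dev/mmcblk2p2 cryptroot
```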

Of course, these solutions are not as convenient as a normal modern smartphone. Needing to put two items together to make one of them work is not ideal. However, they provide a potentially cheap solution to a real hole in security. If your risk profile warrants looking beyond your six-digit phone PIN and you want the control of the PinePhone, these might be great routes for you to take. However, if we wanted to go beyond to make an all-in-one solution, there are options.

The PinePhone has a set of pogo pins on its back that allow peripherals to be attached as modules. These pogo pins provide I2C communication and power to the peripheral. There exist security coprocessors, such as TPM 2.0 compatible devices and other proprietary chips, which talk over I2C. Putting these things together could allow us to hand off creation and storage of a good encryption key to another processor which sits inside the body of the PinePhone. Assuming we could place trust in such a processor to never give up the encryption key and to wipe itself if it has the wrong PIN presented too many times (just like our smartcard), we could very possibly enable the user to have their convenient PIN but maintain a system that’s reasonably secure against a small organization of attackers. Essentially, we embed the smartcard into the PinePhone.
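
Assuming such a module speaks TPM 2.0, the idea might be sketched with tpm2-tools like this (the PIN, lockout numbers, and device names are illustrative):

```sh
# Let the TPM enforce the retry limit: 5 tries, then a 24-hour lockout.
tpm2_dictionarylockout --setup-parameters --max-tries=5 --recovery-time=86400
tpm2_createprimary -C o -c primary.ctx   # create a parent key that never leaves the TPM
# Seal a random disk key behind the user's PIN:
dd if=/dev/urandom bs=32 count=1 | \
    tpm2_create -C primary.ctx -u key.pub -r key.priv -p "123456" -i -
tpm2_load -C primary.ctx -u key.pub -r key.priv -c seal.ctx
# Only the correct PIN releases the key; the TPM counts the failures itself.
tpm2_unseal -c seal.ctx -p "123456" | cryptsetup open --key-file=- /dev/mmcblk2p2 cryptroot
```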

If we are able to implement some of these solutions, you could be reasonably sure that you have maintained control of your PinePhone while protecting it from an attacker who has unlimited physical access to it (in other words, they don’t give it back). The ideas I’ve proposed are nothing new, but they could be applied in novel and user-friendly ways to make them more adoptable by PinePhone owners. Each potential solution has its trade-offs as well, but they generally trade a greater risk of compromise for a lower cost to develop and to buy hardware for. But let’s jump up to the final boss of any mobile device threat model: the attacker who not only takes your phone, but gives it back without you noticing. That’s right, it’s time to talk about the Evil Maid.

why do I hear boss music?

Every discussion of threat models for mobile devices eventually ends up at one of two conclusions: the Cold Boot attack (which I, uh, gracefully deflected earlier) and the Evil Maid.

The Evil Maid attack goes like this: You are staying in a hotel. Your PinePhone has data on it that an attacker wants very badly. This attacker has the resources and will to engage in clandestine operations against you to gain access to that data. You are great at software security, though. The attacker can’t reasonably compromise your device from afar. The attacker determines that the easiest way to compromise you is to trick you into giving them the data yourself.

The attacker knows the hotel that you are staying in. They bribe the hotel’s maid (or disguise themself as one) to gain access to your hotel room. You leave your PinePhone in your room while you get some breakfast. This grants them physical access to your device for a short period of time. Using this access, the attacker makes a reasonably undetectable modification to the device. For example, they install a piece of malware that acts like your password entry screen. You return to your device after the Evil Maid has left, plug in your smartcard, and enter your PIN. The malware takes the decryption information that it now knows and stores it somewhere on the disk. Later, when you leave your phone unattended again, the Evil Maid can steal it. Now they have full access to your data.

For bonus points, they could develop malware that attaches itself to your operating system somewhere in the boot process and gives them backdoor access to your device. They don’t even need to steal the phone then.

The Evil Maid attack applies to any situation where a device may be unattended for a period of time. It could occur at an airport security screening, a library, or even a public restroom. The Evil Maid moniker is also sometimes used when discussing supply chain attacks, where the installation of malicious software is done while the device is in transit between the factory and your house. This supply chain threat is not entirely theoretical: the USA’s National Security Agency is known to do it.

This is one case where Verified Boot is the gold standard to protect against a given threat model.

Verified Boot was discussed at the top of this essay, so there’s no need to reiterate how it works here. Assuming your adversary has the ability to modify your device’s early boot process without your knowledge, Verified Boot should be able to detect the change and stop further execution. That prevents you from decrypting your data, and prevents the attacker from winning.

There has been some discussion on an owner-controlled verified boot process. Hugo Landau discussed this in his blog post, How secure boot and trusted boot can be owner-controlled. Cory Doctorow discussed it twice, once in Lockdown: The coming war on general-purpose computing and again in The Coming Civil War over General Purpose Computing (both are used as background sources in Hugo’s post). There is also an existing project to create a secured boot process with entirely open source, auditable, owner-controlled software named Heads.

Both Hugo and the Heads authors assert one unequivocal truth about booting a system that is resistant to an Evil Maid attack: the system must start from an entry point that we trust in order to continue to be trustworthy. We cannot possibly audit this entry point at runtime: if an attacker compromised the entry point, they could just fake its authenticity to us. The entry point must be unchangeable at runtime in order to be trusted. It is called the root of trust because if it can’t be trusted, the entire system must be assumed compromised.

Hugo’s ideals of an owner-controlled, trustable boot process on today’s commodity hardware sound simple. We start by creating a secure, bug-free bootloader that can be controlled by its owner. This bootloader can somehow verify that any modifications to itself or the software that it is about to boot are approved by the device owner. We sign that bootloader, then place the hash of the key we used to sign it into the SoC’s write-once storage. Now we’ve locked the device into only booting software signed by the new key: in other words, only our bootloader. Then we throw the key away. The bootloader is now effectively unchangeable because it is the only code that can ever be booted by this SoC. We have created an unchangeable entry point for the system… but in a pretty extreme way. Any security problem in the bootloader will be permanent. That is a supremely uncomfortable position to be in. Also, buyers of a device using such a bootloader would have to trust that the key was actually thrown away.

Heads takes a slightly different approach. It exploits the fact that, on the computers it supports, it must be stored on an SPI flash chip in order to boot. These flash chips generally have a hardware write-protect pin. When that pin is set to a certain electrical value, it is (hopefully) impossible to overwrite the write-protected area of the storage. Therefore, if the owner flashes the storage with a bootloader that they trust to verify the rest of the system, then write-protects that storage, the entire system can be trusted as an extension of that unchangeable area.

Heads’ approach to verifying the boot process is desirable in other ways, too. Like our potential solution to prevent an attacker with physical access from getting access to the data on our PinePhone, Heads uses a TPM 2.0-compliant module to encrypt its disk encryption keys. Like TBBR-style Verified Boot, it uses the checksum of the next piece of software that will be booted to determine what happens next. It then diverges from Verified Boot for a while. Instead of checking that the next code to be loaded matches a given signature and halting boot on an unexpected result, Heads measures every piece of code in the boot process instead (until it begins to boot the Linux kernel, when it switches back to signature verification).

The first bootloader creates a checksum of itself and places that information into the computer’s TPM. Then it checksums the next piece of code in the boot process and tells the TPM about it, that piece of code checksums the next, and so on. At the end of this process, the system’s TPM knows about the checksums of all of the code that’s running on the system. Now, Heads asks the TPM to decrypt the disk encryption key. The TPM looks over all of the checksums it knows about. If they match a set of expected values, the TPM gives the decrypted disk encryption key back to Heads. If not, the TPM refuses the operation. Assuming the first bootloader on the system can be trusted (that’s the root of trust), an attacker cannot change the system software without causing the TPM to refuse our decryption request. This entire process is called Measured Boot.
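
The sealing side of Measured Boot can be sketched with tpm2-tools (the PCR selection is illustrative, real firmware extends the PCRs itself during boot, and none of this is the exact Heads implementation):

```sh
tpm2_pcrread sha256:0,2,4   # the measurements the boot chain extended
# Build a policy that binds a sealed object to the current PCR values:
tpm2_createpolicy --policy-pcr -l sha256:0,2,4 -L pcr.policy
tpm2_createprimary -C o -c primary.ctx
tpm2_create -C primary.ctx -L pcr.policy -u key.pub -r key.priv -i disk.key
tpm2_load -C primary.ctx -u key.pub -r key.priv -c seal.ctx
# Unsealing succeeds only while the PCRs still hold the expected values:
tpm2_unseal -c seal.ctx -p pcr:sha256:0,2,4
```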

Now, of course, Measured Boot is not perfect. An attacker could snoop on the communications between Heads and the TPM to learn the hash values that the TPM expects, then boot their own software that fakes those values back to the TPM at a later time. This is where adding Verified Boot to Measured Boot could be quite powerful: the attacker can’t boot their own software that fakes the boot measurements because Verified Boot will block it. The attacker needs to be able to fake the measurements and pass Verified Boot in order to succeed in their attack. That would be a tall order for even the most seasoned attackers. Even when using one of these technologies alone, though, the Evil Maid’s requirements for a successful attack are so high as to prevent most from trying.

There is also the associated problem of “how do I verify that my Heads machine hasn’t been tampered with before I type in my encryption key?” After all, the Evil Maid could have simply stolen your computer and replaced it with one that looks a lot like it, and now only needs to steal your encryption passcode. Once you type in your passcode on the fake computer, the Evil Maid learns of it and is able to decrypt your real computer. This is where Matthew Garrett’s Anti Evil Maid 2 Turbo Edition project, better known as TPMTOTP, comes into play. I feel like he does a better job of explaining it on his blog, but in a nutshell it uses TOTP to display a six-digit number on screen that changes every thirty seconds. If that number matches the number in your two-factor authentication app or hardware TOTP token, you’re (probably) safe to unlock! There is also a similar implementation that uses a hardware security key to verify that the computer is yours: plug in your security key and it does a bit of HOTP magic with the computer. If you see the green light on your key, you’re safe to enter your passcode. However, this does not grant any more security than TPMTOTP; it just makes the process more convenient to perform on every boot.
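
The core of the TPMTOTP idea can be sketched by combining the sealing scheme above with an ordinary TOTP tool (totp.ctx is assumed to hold a base32-encoded TOTP secret sealed against the boot PCRs, as in the previous example):

```sh
# Only untampered boot software can unseal the shared secret, so only an
# untampered machine can display the code your authenticator app expects.
oathtool --totp -b "$(tpm2_unseal -c totp.ctx -p pcr:sha256:0,2,4)"
```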

There are a few ways that we might use all of this theory and practical knowledge to verifiably secure our PinePhone against an Evil Maid. Adding a TPM and porting Heads’ Measured Boot process and TPMTOTP to the PinePhone would be a great start. With that, we gain resistance to malware that sneaks into the early boot process. The attacker would need to develop their malware to take over the measured boot process at its very first entry point, faking the measurements for the rest of the software. That would be quite difficult.

However, such malware is possible, and it could be written to your eMMC or SD card with physical access for a very short period of time or with a simple root-privilege exploit. To take things all the way, we need to make it impossible to take over the boot process as early as possible. Unfortunately, the A64 makes locking down the early boot process rather difficult. It has a hardcoded boot order: it loads software from the SD card, then the eMMC, then SPI flash. To our knowledge, this boot order cannot be changed. We only have three main options to secure our boot process:

  1. Remove or physically disable the microSD card slot from the PinePhone so the attacker cannot easily insert a malicious SD card. Write our entry point bootloader to the eMMC storage. Use the features in the eMMC 5.1 standard to write-protect this bootloader on eMMC (see the sketch after this list). If a microSD card is suddenly present in your PinePhone, power it off until it can be safely audited.
  2. Write the entry point bootloader to a microSD card and use the card’s firmware features to enable permanent write protection. Mark the microSD card and slot in some way that makes it evident that they have been tampered with. For example, epoxy the microSD card in place. Hope that the microSD card’s firmware write-protect actually works. If any physical tampering is suspected, power off the PinePhone until it can be safely audited.
  3. Sign the entry point bootloader and place the hash of the signing key in the Verified Boot storage of the PinePhone. This protection seems most resilient: Place whatever storage media you want into the PinePhone. If it’s bootable but unsigned, the A64 will reject it. If it’s bootable and signed, it’s probably the software you trust. We have officially given up on resisting Verified Boot because the attacker is too powerful.
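
For the first option, the write protection might be applied with mmc-utils roughly like this (the A64 boots from a fixed offset in the eMMC user area, so that region is what needs protecting; the offsets, sizes, device name, and exact mmc-utils syntax are all assumptions to check against your setup):

```sh
dd if=trusted-bootloader.img of=/dev/mmcblk2 bs=1024 seek=8   # install at the 8 KiB boot offset
mmc writeprotect user set pwron 0 16384 /dev/mmcblk2          # power-on write protect over that region
# "pwron" protection clears when power is lost and must be re-applied by
# trusted code on every boot; "perm" would make the protection irreversible.
```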

Keep in mind that the idea of these protections is to prevent the attacker from being able to boot code that fakes measurements to the TPM. If they’re trying that, they are quite well-resourced already. All of these protections have trade-offs. The first two require the PinePhone’s owner to be well-trained in how the protection works. All of them place ownership of the device in the party who creates the bootloader and seals it on the storage. With a disabled microSD card slot, re-enabling the slot would allow taking control of the device… but that is also a security risk. With an epoxied and read-only microSD card installed, transferring ownership requires replacing the card slot or the motherboard, depending on how liberally the epoxy has been applied. The easier the replacement is, the more likely it is that a malicious attacker could subvert the protection. And, of course, writing the Verified Boot keys creates all of the social problems that I discussed at the top of the essay. Any of these options would be acceptable for a very knowledgeable device owner to take… but a vendor could not ship them without making themselves the owner of the PinePhone, with the purchaser being nothing more than an authorized user.

Such is the story for building a system when we assume that the attacker has the ability to create and install early-boot malware: either we trust the system from the very beginning, or we do not. Being able to trust the system from the beginning requires very clever engineering to pull off. We have not found a way to make such a clever system without requiring the owner to be just as clever to install it. Our perfect Verified Boot and Measured Boot strategy would require the owner to know how to build a bootloader, create and safely store a signing key, and write that key’s hash to the A64. So owners outsource the difficulty to a third party, and now the third party is the real owner of the device. Then the device purchasers realize that they don’t like the third party being in control of their system, so they take on the cleverness of securing their own systems again… This cycle continues.

Going off the rails to bring things back into focus

Of course, even once we’ve gone this far to secure your PinePhone, things still aren’t perfect.

Don’t look at me like that. The possibilities seem endless once our attacker can complete an Evil Maid attack. If your endgame attack vector is someone stealing your device then giving it back to you without you noticing, you are so far away from being safe that you’d need a telescope to even glimpse it. The attack could have been planned far in advance, the attacker could have extensive knowledge about you, and the attacker must really want your data to be able to even attempt to become an Evil Maid.

The eternal truth of computer security holds: once an attacker has physical access to your computer, it’s their computer. If they give the computer back to you, it’s still their computer. Trying to solve this fundamental security hole causes us to spend much time and effort for little tangible benefit… The only sure way to prevent an Evil Maid attack from affecting you is to never lose physical control of your computer, or to keep sensitive information off of any computer you might lose physical control of. Any security measures on top of this might delay the inevitable, requiring more hands-on time with the device to successfully complete the attack. They do not change the eternal truth.

The PinePhone’s form factor makes it both easier and harder to keep in your physical control: it’s small enough that, unlike your laptop, you can carry it with you everywhere. However, it’s also small enough that you can easily lose it. Airport security and border checks don’t care either way.

Even though there are an infinite number of ways that an attacker could compromise any system we create, there is still truth to the idea that making things more difficult for them will decrease our risk. The more threats we can mitigate on an out-of-the-box PinePhone, the better.

Overtargeting and underdelivering

In the open source community, I’ve found that we tend to get hung up on minutia, missing the forest because the trees are just so interesting. When discussing security, we tend to start at a simple threat model… then someone suggests an attacker with a few more resources. Then a few more. Soon, you’re back at an attacker who cares enough about the user’s data to perform an Evil Maid attack, one that has full control of the system starting at the bootloader. What started as a small threat modeling session to improve security a bit is now acting in extreme absolutes, and we are paralyzed with fear since we will be unable to solve the problems we’ve created for ourselves in a way that maintains device ownership.

I think it’s important to remember something when we drive into this land of extremes: for almost all of us, no one cares who we are, let alone what data we wish to hide. It is highly unlikely that we will ever be targeted by someone with enough resources to complete an Evil Maid attack. If you are, technical solutions like Measured Boot and Verified Boot still can’t help you. The mafia will lock you in a small cell and beat you with a wrench until you decrypt your laptop willingly. The NSA will attack your phone’s booted operating system rather than faffing about with its bootloaders. If you are up against such an attacker, you are beyond what software alone can do for you. You need better operational security. Your focus should be on not putting yourself or your data in harm’s way in the first place, then on protecting the data that you have to carry with you. Therefore, security does not need to be approached as “if the bootloader isn’t secure, nothing is secure.” It can be approached as “what is the greatest risk to our data?” With the realization that software doesn’t need to be the answer when an extreme situation is brewing, we can cool down and figure out what we should do first. The forest comes back into focus, since we aren’t so worried about that fir over there.

We should be able to protect against smartphone thieves who want to earn a quick profit and have unlimited physical access to a device. This is probably the most widespread threat that our devices will suffer. We should be able to use detached LUKS headers or smartcards to protect our encryption keys. We should be able to go a step beyond and integrate this kind of hardware into our devices directly, making it possible for users to have a sleek, convenient device while also helping them stay secure. These options do not give up control of the system: it’s totally possible to wipe the keys from the secured system and start again without any of the previous device owner’s data or control.

We should keep in mind that, barring attacks from thieves, most security attacks will happen at the operating system or peripheral level rather than at the bootloader. We should use this knowledge to better target our resources toward isolating applications and system services from each other so they can’t be used to exfiltrate data: confinement and containerization are the name of the game. Done wrong, these technologies get in the way of computing. Done right, even the most discerning Linux user will have trouble noticing them. We’re still in the early days here; our current solutions still get in the way of computing. I have hope for the future.

We should continue to evaluate where the greatest risks to our system come from. Once we are able to deliver a system that is reasonably secure against simply stealing the device or hacking the operating system from a distance, we should consider that the next logical step for the attacker may be to compromise the bootloader. Our secure system’s most obvious flaw is the code that boots it… now how do we fix the bootloader?

This approach does dismiss some use cases out of hand.

If a user places an equal amount of value on a stolen device and the data on that device, we won’t be able to help them. Remote killswitches, which are common in today’s mobile devices and help deter theft, cannot be enforced without a verified boot process. At best, we could prevent a thief from using the original operating system on the stolen device. This is why you see many MacBooks on eBay which have Ubuntu installed: Apple has determined that the device is no longer eligible to operate with macOS, but Apple’s lack of control over the boot process means the computer can still have a bit of use outside of its proprietary home. When thousands of iPads and iPhones are going to landfills every month because their previous owner forgot to sign out of iCloud before giving them away, this doesn’t seem so bad.

If a potential user analyzes their threat model and determines that they need a device which is secured against some Evil Maid attacks, but they do not have the knowledge to do the securing themselves, we won’t be able to help them either. They will need to place their trust in another hardware vendor and hope that it works out in the long term. They could be a journalist reporting on a humanitarian crisis caused by a corrupt regime, or a politician, or someone in control of a nuclear weapons system. Unfortunately, we don’t have the resources to help them yet.

But missing those use cases does not mean that all is lost. General-purpose mobile Linux distributions do not need to be the be-all, end-all solutions for everyone in the world. For those of us making general-purpose operating systems, it’s okay to replace old insecure designs with new secure ones from the top down, targeting effort where the greatest risk lies. This is actually required of us, since we do not have the funding to make a 100% bottom-up secure system. Vendors can appear who use the PinePhone’s hardware to provide a system where the purchaser is a tenant rather than the owner, but which provides bottom-up security, assuming the purchaser trusts the vendor. This is the great benefit of the PinePhone in the end: its explorable nature can make it different things to different people.

I think that this diversity is the primary strength of the open source community in general. There can be projects which target the most paranoid, high-risk users who can deal with a bit of inconvenience; and there can be projects which target everyday users with much different threat models. These projects can work together to make something better when it makes sense, and move the world forward in their own separate ways when it doesn’t.