Trusting Trezor without trusting SatoshiLabs

Hi,

I ordered a Trezor a few days ago directly from shop.trezor.io. Before ordering I did quite a lot of research and I am confident the device is secure and probably the best we can do to protect ourselves against third parties. However, I still didn’t quite catch how we can be sure if we don’t trust SatoshiLabs. Here is what I understood so far, so please let me know if this makes sense and if I missed something (good or bad).

So there is the bootloader and the firmware. They are both open source, and once we compile the code we can compute a hash of the resulting binary. When we first get the device, it ships without firmware and with a write-protected bootloader from SatoshiLabs. Write protection is achieved by the MPU, because the hardware’s built-in write protection has a bug, so the write protection is also part of the bootloader (not sure if this is still the case in 2021).
So when I get my new Trezor, I connect it over USB and open the SatoshiLabs application, which detects it and writes the latest firmware to it. The bootloader does the writing and checks the signature of the firmware. When the firmware is loaded, it checks the signature of the bootloader, and only if both are OK does the device proceed to boot without a warning.
Now some questions:

  1. The bootloader doesn’t know the hashes of future firmwares, because it is read-only. It only checks that the firmware is signed by the SatoshiLabs private key; that doesn’t guarantee it is the same as the one in the source code repository, since the binary hash is not compared. So basically it only guarantees that the firmware comes from SatoshiLabs, and we can check that the firmware matches the source code by verifying its hash before uploading it. Does this sound OK?
  2. When the firmware checks whether the bootloader is OK, does it have code in it that compares the current device bootloader hash with all bootloader hashes released so far? If there is such code, then we could compile our own version of the bootloader and check whether its hash is in the firmware code, and be sure it is OK. This assumes we can also check the firmware code which calculates the hash of the current device bootloader, which I guess is also available in the source?
  3. When I want to update my device, can I first download the firmware, check its signature, and then upload that same firmware to my device? If that’s correct, then the above points should guarantee that I have both the bootloader and the firmware as they are in the source code, even if I don’t trust SatoshiLabs.

Also, I guess things like the bootloader saving the firmware in another memory block and actually booting a hidden firmware are guaranteed not to happen, since we have the source code, and if we can be 100% sure those are the actual binaries on the device then there is no fear (hopefully that is guaranteed by the above points). It makes me wonder how anyone can trust a hardware device that has any part of its code closed source, including Ledger.

Sorry if this post is too long, but if someone explains a bit, it would be useful to everyone who has similar suspicions about trusting the manufacturer.

Thanks

Hello,

First of all, you can’t be buying hardware from SatoshiLabs if you don’t trust SatoshiLabs at least a little. Even if we ignore nation-state level stuff like building a custom chip that looks identical to an STM32, there are a lot of things the pre-loaded software can do to mess with your verification attempts.

The only true way around this issue is to buy components off the shelf and assemble a custom T1 at home.

For now, let’s assume that if everything checks out on a custom-built T1, it works the same way on an official T1 (in other words, SatoshiLabs is not actively malicious and is not subverting its own firmware on its own hardware).

The bootloader is not actually signed separately. The firmware, however, has a list of known bootloader hashes; i.e., for every firmware version, the bootloader must be in a hard-coded list of known bootloaders.

This, by the way, means that you can’t install a too-old firmware on a too-new bootloader, because the firmware will not recognize the bootloader and will fail.
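To illustrate the idea with a minimal sketch (the hash values and the use of plain SHA-256 below are placeholders for illustration, not the actual scheme from the Trezor source), the check amounts to something like:

import hashlib

# Placeholder digests; the real firmware hard-codes the hashes of all
# officially released bootloaders (see known_bootloader in the source).
KNOWN_BOOTLOADER_HASHES = {
    "placeholder-digest-of-bootloader-A",
    "placeholder-digest-of-bootloader-B",
}

def bootloader_is_known(bootloader_image: bytes) -> bool:
    # Hash the bootloader region of flash and look it up in the list.
    return hashlib.sha256(bootloader_image).hexdigest() in KNOWN_BOOTLOADER_HASHES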

Correct.

Exactly. The code is here.

You can do this, but it’s kind of pointless. The firmware always contains its own bootloader and replaces the installed one with it. If you trust the firmware to do the check right, you can also trust it to do the replacement right.

If you want to be extra thorough, you could check out the bootloaders for the versions in known_bootloader, build them, and verify that the hashes match. This might, however, prove difficult, because historically the build was not entirely reproducible: the result can depend on the compiler version, which was not properly pinned, so trying to get the same binary might be hit-and-miss.

Sure.

You can. If you use trezorctl from the git version of the firmware repository, there are even commands that do it:

trezorctl fw download
trezorctl fw verify trezor-1.10.3.bin
trezorctl fw update -f trezor-1.10.3.bin

(In the released version, there is only a single command, firmware-update, that does all three at once.)
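If you want to double-check the download by hand, here is a minimal sketch, assuming the fingerprint published for the release is a SHA-256 of the file as downloaded (for some releases the fingerprint is computed over the image without its header, so check the release notes for your version):

import hashlib

# Compare this digest against the fingerprint published for the release.
with open("trezor-1.10.3.bin", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())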

Oh no, that is very much not guaranteed. For starters, there is no guarantee that the pre-installed bootloader actually installs the firmware that you upload. What it could do instead is extract the version number and report it the next time you start the Trezor.

As pointed out at the start of my post, there is really a lot that pre-installed software can do to trick you.

A relatively good way to verify that there is no hidden firmware is to build your own firmware that is exactly as big as the T1’s memory, is filled with random garbage, and, when you boot it up, shows a hash of itself on the screen. It’s technically possible for hidden software to show you the right result and still remain malicious in the background, but that solidly pushes the attacker into the realm of nation-states.
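A minimal host-side sketch of that idea: pad a test firmware with random bytes up to the full flash size and record the digest you expect the device to display (the 512 KiB flash size and the file names are assumptions for illustration):

import hashlib, os

FLASH_SIZE = 512 * 1024  # assumed flash size; adjust for the actual chip

with open("test_firmware.bin", "rb") as f:
    image = f.read()

# Fill every unused byte with random garbage so nothing can hide in flash.
image += os.urandom(FLASH_SIZE - len(image))

with open("padded_firmware.bin", "wb") as f:
    f.write(image)

# The firmware itself should show this same digest on the device screen.
print("expected on-screen hash:", hashlib.sha256(image).hexdigest())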

Thank you for your detailed reply and the code links. I understand most of it, but one thing I didn’t quite get:

According to SatoshiLabs (https://blog.trezor.io/trezor-one-firmware-update-1-6-1-eecd0534ab95), the bootloader was supposed to be read-only (so even the firmware can’t update it); however, due to a hardware bug, they made a fix in firmware 1.6.1 which restricts the bootloader memory using the MPU. After that firmware is uploaded and updates the bootloader, no other firmware should be able to overwrite the bootloader because of the MPU mapping. If that’s not the case and any firmware can overwrite the bootloader even after 1.6.1, that’s completely different from what SatoshiLabs claims about the bootloader being read-only (they mention it as one of the very important safety measures).

Since the bootloader is open source and we can build it and generate its hash, can’t we guarantee it doesn’t do anything malicious, provided we can dump the bootloader and generate its hash? And if we can guarantee that and also see the code of the firmware (and know which address is the chip’s starting execution point), there is nothing else left to check to be sure it is safe (except, as you mentioned, if the chip is custom-made so that it executes code differently from the original, or, for example, first executes some code in memory outside the bootloader and the firmware).

Sorry for more questions, but I am just trying to understand better, and hopefully with a few more posts it will be clearer.

Thanks

I’m going to clear this up, because the blog post in question doesn’t really explain how the protections work in the first place.

Here’s the key piece:

Secondly, as the bootloader write-protection by STMicroelectronics is flawed, we supplemented it with write-protection enforced by the MPU (Memory Protection Unit): Only a firmware signed by SatoshiLabs is allowed to modify sensitive parts of the memory.

A thing the blog post fails to mention is that the STM32F205 chip starts in a privileged mode that can control the write-protection options. Before jumping into the firmware, the bootloader has the option to drop out of privileged mode.
So yes, there is write protection on the bootloader, but the bootloader will allow a properly signed firmware to disable this protection in order to update the bootloader.

(Otherwise you could never update the bootloader again, which might be a problem, e.g., in case another flaw is found in the bootloader.)

The flaw in question meant that even though everything was set up right, an unsigned firmware could unlock the write protection too.

In conclusion: no, not just any firmware can overwrite the bootloader. Only a signed firmware can do that.

True, but this is a problem, because generally you can’t dump the bootloader.

The primary way to dump memory is to write code that, e.g., sends you the memory contents over USB, and then upload this code to the Trezor. As I already mentioned, you have no guarantee that your code is actually running on the Trezor. Nefarious pre-loaded software could, for example, install your firmware and then replace all pointers to the bootloader with pointers to some other memory area. Then, when your firmware executes, it dumps something (presumably a good copy of the official bootloader), and everything will look right to you.

What you can do is think of ways a malicious loader could mess with the firmware, and then write a firmware that is resistant to those techniques. One approach that comes to mind that would be very difficult to mess with: build a firmware image that is exactly as big as the chip’s memory and is encrypted, with a small run-time decryption loader at the start. The encryption means that your image will not compress, so the attacker has no space left to hide things, and also that it is pretty much impossible to patch the contents of the image on the fly.
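To see why encryption closes off the free space, here is a toy demonstration; the SHA-256-in-counter-mode keystream is just a stand-in for whatever cipher a real loader would use:

import hashlib, os, zlib

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR the data with SHA-256(key || counter) blocks.
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[offset:offset + 32], pad))
    return bytes(out)

plain = bytes(64 * 1024)                  # 64 KiB of zeros, highly compressible
cipher = keystream_encrypt(os.urandom(32), plain)

print(len(zlib.compress(plain)))   # tiny: plenty of slack space for an attacker
print(len(zlib.compress(cipher)))  # roughly 64 KiB: no room left to hide anything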

(The other way to dump memory is to attach a debugger to the chip and execute the Kraken hack. This will allow you to inspect the bootloader and see that it’s the right one. It will also destroy your Trezor.)

Thank you for the information. So the conclusion is that we basically need to trust any hardware wallet manufacturer in order to use a hardware wallet. It makes me wonder why having an open-source firmware/bootloader is even advertised as an advantage in this regard. If we cannot verify them anyway (as you clearly explained above), then their being open source does not help at all with trusting the manufacturer. I am now not even sure if an offline phone + open-source wallet is more trustworthy than a hardware wallet, as in that setup I can trust the manufacturer (Apple or Samsung, for example), and for the software I can 100% verify its code (I already know how to do this). I know many won’t agree, but I think I can trust a brand-new Apple phone, with only an open-source wallet on it and connected to the internet only when I want to move money out of the wallet (or even just signing with QR codes without internet), more than any hardware wallet. At this point everyone starts saying how exposure to the internet opens many “attack vectors”, etc., but it is literally impossible that an iPhone connected just for updates and when transferring money is hacked at that specific point in time. Maybe if you are a billionaire and someone targets you directly, but otherwise it is impossible. However, if you are a billionaire, a hardware wallet can be targeted too.
Also, taking all this into account, I don’t think I could entrust my savings to any hardware wallet. I think the only way to go is multisig; however, I researched that, and currently the technology is not mature enough for it to work seamlessly. I will open another question about this on the forum with more details.
Anyway, thank you very much for all the input; you cleared up a lot of the question marks I had.

I would agree with this.

The fact that the manufacturer is making everything open source, including the bill of materials, etc., is a strong indication that they have nothing to hide. It is always possible for the manufacturer to screw you over, but if they wanted to do that, why make it so difficult for themselves in the first place? It would be much easier not to publish the sources.

Also, there’s still the option of building your own Trezor clone from off-the-shelf parts, for full verifiability and the nice cold-wallet feeling.

That’s not true. E.g., if an exploit exists, it might get into your iPhone when you download the transaction history (which you need to do before you send money out), and the outgoing transaction will already be modified to send everything to the attacker.
Nobody will be sitting behind their keyboard waiting for you specifically to connect. Instead, the exploit payload will be placed at the server that you use to download transactions.

In general, I understand why you’d be unsatisfied with the level of verification available. However, I disagree with your conclusions.

What is your threat model? Who are the attackers in your scenario? Why would you conclude, from the fact that you can’t 100% verify everything, that the manufacturer is inherently untrustworthy?

Would you personally audit the source code of your iPhone-based wallet? If not, whom do you trust more: the author(s) of that wallet and the security research community around it, or the manufacturer of a hardware wallet (whose business is directly tied to the security of their product) and the security research community around it?

Nothing is Truly Unhackable™ and it must come down to trust at some point. The fact that a significant body of security research says that a solution is secure does not mean that an exploit does not exist, but it is a strong indicator in that direction, or perhaps a measure of the difficulty of finding an exploit. Even if you do your own security audit, you will never have 100% confidence. You could have missed something.

Personally, I believe that for a consumer, using a hardware wallet (from any of the established manufacturers) is significantly safer than using a hot wallet of any kind. For significant (millions USD and more) crypto savings, relying on a single wallet is irresponsible and some sort of multisig solution should definitely be used, ideally a combination of hot and cold wallets from multiple manufacturers.

Well, making it difficult pays off by earning more trust from users. And the amount of money in play is so big that any difficulty would be gladly accepted by anyone malicious.

First, this only concerns the actual software wallet I use. Given that software wallets are open source with thousands of contributors and have passed the test of time with millions of users, this is such a low probability that I would gamble on it before gambling on trusting every employee in some remote company. Also, the code which downloads transactions is not related to the private key, and it would need to be so wrong for this to happen that anyone with mediocre coding knowledge would spot it immediately. As I said, this code is modified and checked by thousands of developers, and it is so unlikely to happen that I could sleep well if that were the only problem I could have.

Because they have billions of reasons (coins) to do so, and laws are not yet defined in this area, so basically they could get away with it, as many new coin projects get away with things every day.

I don’t need to verify the code itself (and I can if I want), as the developer community does it every day. What is important is that I know 100% that what’s running on my phone is the code which was checked, and I can verify that quite simply on a fresh Linux virtual machine in an hour or two. With a hardware wallet, I simply cannot know that what is running on my wallet is what is in the source code repo.

I agree; however, this is currently so complicated that people are already losing their funds because of the unpolished technology. For example, until recently hardware wallets couldn’t let you confirm public keys when approving transactions. Also, people are losing money because in a 2-of-3 multisig they think that 2 private keys are enough to reclaim their money, when they actually also need all 3 xpubs (which are awkward to store, to say the least). And there are different formats of xpubs, confusing everything even more. For example, if you create a watch-only multisig wallet in Electrum and try to enter an xpub from the exact same software (Electrum), by default it will not accept it, because one is a Zpub and the other a zpub. Not to mention making it all work together with a hot wallet + hardware wallet and having the right storage strategy. I am not saying it’s impossible (people with even small amounts of money are doing it), but there is a risk in setting everything up correctly, when in fact this should work out of the box. It reminds me of the time when people were losing their wallet contents because of the change-address misunderstanding, before deterministic wallets existed.
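(For what it’s worth, the Zpub/zpub mismatch is only a different 4-byte version prefix on the same underlying key data, as defined in SLIP-132, so it can be converted. A rough sketch; the exact prefix constants are the assumption to double-check against SLIP-132:)

import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_decode(s: str) -> bytes:
    # Base58 string -> payload, verifying the 4-byte double-SHA256 checksum.
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    payload, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != checksum:
        raise ValueError("bad checksum")
    return payload

def b58check_encode(payload: bytes) -> str:
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(payload + checksum, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    return out

def convert_Zpub_to_zpub(key: str) -> str:
    # Swap the SLIP-132 version prefix; the key data itself stays the same.
    payload = b58check_decode(key)
    if payload[:4] != bytes.fromhex("02aa7ed3"):  # assumed "Zpub" prefix
        raise ValueError("not a Zpub")
    return b58check_encode(bytes.fromhex("04b24746") + payload[4:])  # "zpub"
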
Anyway, I am still waiting for my Trezor; it’s stuck in customs (hopefully it will be released soon). When I get it, what I want is to set up a 2-of-3 with the Trezor + 2 hot wallets. Do you have any suggestions regarding this, or should I move it to another topic? The default tutorial shared by SatoshiLabs is very limited, exploring only multiple-hardware-wallet setups. It also doesn’t mention important information like the xpub issues above (and others which may exist that I don’t know about).

Thanks

This assumes (a) that this malicious actor is patient, because they first need to build up a lot of trust, only to burn it all in one glorious moment, and (b) that openness is that strong a factor in attracting customers.

The experience with Nigerian prince scams points to the opposite: you are much more likely to be successful stealing from people naive enough to fall for obvious ploys.

True, if there were a deliberate backdoor. What is much more likely is a buffer overflow in parsing the incoming transaction data, which is difficult to notice when reviewing the code, but when someone does notice it, it allows arbitrary code execution (i.e., the “parse transaction” piece of code is modified on the fly to retrieve the private key).

This is also the method by which iPhones were exploited in the past and continue to be exploited in the wild. In the recent NSO Group hack, it was enough to receive a malformed message.

Also, don’t forget that any app on your iPhone could in theory be exploited, pivot to full system access, and then steal data from the crypto wallet. If you have a dedicated iPhone that is literally never connected to a network and communicates with your PC via QR codes, it is essentially a cold wallet. Otherwise, not really.

The main HWW developers (Trezor, Ledger) are legitimate businesses in the European Union, where laws against stealing very much exist, and seem to cover cryptocurrency holdings just fine.

This is very true. I would generally recommend a paid service to help with this. I’ll mention Casa because it’s the only one that I know, but I am sure that others exist.

If you want to discuss multisig options, I suggest creating a separate topic, because it might attract attention of other users on this forum who might have their own experience.

The probability of a buffer overflow that lets an attacker actually access the encrypted key and decrypt it is almost zero. Also, most security “holes” related to the iPhone lately are so specific and hard to pull off that it is almost impossible for them to affect the average Joe. When was the last hack that affected iPhones in a way that let the attacker access encrypted storage and even extract a key from it? Also, I was lately doing security checks on some Docker containers, and in official Linux images and C++ libs there are tons of buffer-overflow issues which stay there, and millions of companies keep using those exact images and libs, because exploiting any of them is very complicated or nearly impossible, and even when done, it mostly just crashes the app.
Moreover, a buffer overflow can be sent to the hardware device too, over USB, as part of the signing requests going back and forth, provided the client app is hacked.
If there are indeed recent examples of vulnerabilities in phones where the attacker could actually access the private key of a hot wallet, please give me some links. I am talking about a stock phone + only the wallet app. If someone installs a bunch of screensaver apps from unknown developers, that doesn’t count. I don’t think there was ever such a case in the history of hot wallets.

I don’t think any company will put big red letters on their website reading “We stole your money”. Even if this happens, it will be presented as a vulnerability or a hack done by a third party; it’s basically impossible to prove otherwise. And laws regarding crypto in the EU and US are still not good enough: for example, PancakeSwap deleted users’ investments (they said it was a hack) and had no responsibility whatsoever (you can read about it on the net regarding their Syrup staking, etc.). This is only one example, and there are hundreds.

Anyway, I received my Trezor today (yoo-hoo), so although I sound negative about hardware wallets, this is more about discussing why they are more secure (if they really are) and clearing up some questions I have. Too many people just say “use hardware wallets” without discussion and keep repeating “attack vector” and “connected to the internet” FUD like parrots.
Don’t misunderstand my replies in that regard; I was not trying to argue, but was trying to really understand the advantages by disagreeing with you. I am a bit disappointed that there is no way to verify the device without trusting the manufacturer, mostly because it complicates things a lot, especially as setting up multisig is a pain.
I will start using my Trezor with smaller amounts, and after some time I hope to find a good and simple way to set up multisig with a hot wallet + Trezor.