TPM and Secure Boot would be good things if there were no way to prove to third parties that you're using them, or that you have them configured a certain way (i.e., remote attestation). It's the fact that such proof is possible that makes them reduce user choice and promote state and corporate surveillance.
Maybe. This assumes I trust Microsoft with a part of my computer that I have no ability to interrogate to see what they’re doing in there.
If it’s on my computer, I should be allowed to read and write to it. End of story. I don’t care if that makes it vulnerable. So far as I’m concerned, letting Microsoft keep secrets from me on my own computer is similarly catastrophic to losing my HD to a crypto-locker virus.
> TPM and Secure Boot would be good things if there were no way to prove to third parties that you're using them, or have them configured a certain way (i.e., remote attestation).
This is exactly what a TPM was made for, so your statement is a little bit paradoxical.
The ideal is the owner being able to use TPM/SecureBoot/etc to ensure that the device is in the configuration they want. That means resisting tampering, and making any successful tampering obvious.
The problem is third parties using TPM/SecureBoot/etc as a weapon against the owner via remote attestation, by preventing them from configuring their own device, with the threat of being cut off from critical services.
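Roughly, both uses rest on the same mechanism: each boot stage is hashed into a PCR-style register, and the TPM signs a "quote" over the result that a verifier (the owner, or a remote third party) checks against expected values. A minimal Python sketch of that flow, with HMAC standing in for the TPM's real asymmetric attestation signature and all stage names hypothetical:

```python
import hashlib
import hmac

# PCR extend: new_value = H(old_value || H(component)).
# This is how a TPM accumulates boot measurements into one register.
def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

BOOT_STAGES = [b"firmware-v1", b"bootloader-v2", b"kernel-v3"]

# Device side: measure each boot stage into the "PCR".
pcr = b"\x00" * 32
for stage in BOOT_STAGES:
    pcr = extend(pcr, stage)

# The TPM signs a quote over the PCR. In a real TPM this is an
# asymmetric signature with a key that never leaves the chip.
ATTESTATION_KEY = b"device-secret"  # hypothetical stand-in
quote = hmac.new(ATTESTATION_KEY, pcr, hashlib.sha256).digest()

# Verifier side: recompute the expected PCR from known-good
# measurements and check the quote. Any tampered stage changes
# every subsequent value, so the quote no longer matches.
expected = b"\x00" * 32
for stage in BOOT_STAGES:
    expected = extend(expected, stage)

ok = hmac.compare_digest(
    quote, hmac.new(ATTESTATION_KEY, expected, hashlib.sha256).digest()
)
```

The mechanism itself is neutral: whether `ok` is checked by the owner or demanded by a remote service before granting access is a policy question, not a technical one.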
Having the upside without the downside would be nice, but how could it work? Is a technical solution feasible, or would it need a law/regulation?
Not a crypto expert, but given that both the bad players seeking control and the people seeking to verify their cloud machines are remote, it seems the technology will roll out without problems and will end up being force-fed into all consumer devices with bullshit excuses.