Yubico Forum

PostPosted: Wed Dec 17, 2014 11:16 am 

Joined: Tue Dec 16, 2014 12:39 am
Posts: 10
When using OTP, it's possible for the user to specify their own secret value instead of, or in addition to, the Yubico-supplied one in slot 1. This allows fully private use of the OTP functionality.

It seems this cannot be done for U2F. Each device has a unique "device secret" which is used to encrypt the private key half of all generated keypairs. That's great, but when is that secret created? Did Yubico generate it in the factory (leaving open the possibility that they retain a copy of it), or is it generated on the device when U2F is first activated? If the latter, how can we verify that?

(btw, I raised this question elsewhere, viewtopic.php?f=33&t=1542, but felt a new thread with a clear [QUESTION] would be the right way to get an answer from Yubico).

It has been asserted that changing the key would break the attestation certificate, but those certificates appear to identify batches of genuine YubiKeys, not individual devices (for obvious privacy reasons), so it would appear they don't certify the device secret value itself.

In summary:

* Is the U2F device secret known to Yubico at any stage?
* Is there a reason that we cannot overwrite the device secret with a value of our own making?

Thanks,
B.


PostPosted: Wed Dec 17, 2014 7:26 pm 

Joined: Tue Nov 18, 2014 9:14 pm
Posts: 95
Location: San Jose, CA
Just to clarify: There is no mathematical relationship between the attestation key/certificate and the device master secret. The relationship is only a logical one.

I presumed that Yubico would want to "break" the attestation certificate because, if you supplied your own device master secret, Yubico could no longer "attest" to the security of the token. You could copy the key, for example, allowing clones. You could set it to all zeros. In either case, I'm sure Yubico would not want to put their seal of approval on your modified token.

I did some more thinking about this, and it seems that allowing a user-supplied device master secret would open a huge security hole: resellers could program their own device master secrets into the tokens and keep a copy, and consumers would have no idea. Obliterating the original attestation key and cert would help a little, but that wouldn't help services which don't check the attestation certificate.
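
(For anyone wondering what "checking the attestation certificate" would involve on the relying party's side: during U2F registration the token returns an attestation certificate and a signature made with the corresponding key, and the service can check that the certificate chains to a root it trusts. A rough sketch using the Python cryptography package; the root certificate and the parsing of the registration response are placeholders here, not Yubico specifics:)

Code:
# Rough sketch: does this attestation certificate chain to a trusted vendor
# root? Inputs are placeholders; real code would pull the attestation cert out
# of the U2F registration response and load the vendor's published root cert.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

def attestation_issued_by_root(root_pem: bytes, attestation_der: bytes) -> bool:
    root = x509.load_pem_x509_certificate(root_pem)
    att = x509.load_der_x509_certificate(attestation_der)
    if att.issuer != root.subject:
        return False
    pub = root.public_key()
    try:
        if isinstance(pub, rsa.RSAPublicKey):
            pub.verify(att.signature, att.tbs_certificate_bytes,
                       padding.PKCS1v15(), att.signature_hash_algorithm)
        else:  # otherwise assume an EC-signed chain
            pub.verify(att.signature, att.tbs_certificate_bytes,
                       ec.ECDSA(att.signature_hash_algorithm))
        return True
    except InvalidSignature:
        return False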

Again, waiting for Yubico to chime in officially on how they handle the device master secrets and their official position on custom device master secrets. I'm curious about the answers.


PostPosted: Wed Dec 17, 2014 8:28 pm 
Site Admin

Joined: Mon Mar 02, 2009 9:51 pm
Posts: 83
To answer some of your questions:

The master device secrets used to generate the site-specific keys are generated on-chip during the programming step of manufacturing (this is also when the attestation certificates are loaded), using the device's RNG. The secret never leaves the device. Let me clearly state that at no point do we ever see these master secrets, and we cannot extract any private keys from any key handles. Just as darco says here and in the old topic (viewtopic.php?p=6542#p6542), the reason this key isn't settable by the user is that attestation requires it. The attestation certificate attests to the fact that the keys it vouches for come from a particular type of device, one that correctly adheres to the U2F protocol and has safeguards in place to prevent device cloning. There is simply no way to do this if the private keys can be re-created outside the device.
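
To illustrate the general shape of such a scheme (this is an illustration only, not our exact on-device construction): a site-specific private key can be derived from the device secret and a fresh nonce, with the key handle carrying just the nonce and a MAC, so the handle is useless to anyone without the device secret. A rough Python sketch:

Code:
# Illustration only, not the exact on-device construction: derive per-site
# private keys from a device master secret, and hand out a key handle that is
# useless to anyone who does not hold that secret.
import hashlib
import hmac
import os

# P-256 group order; the derived scalar acts as the private key (the matching
# public key would come from scalar multiplication on the curve, omitted here).
P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def _derive_scalar(device_secret: bytes, app_param: bytes, nonce: bytes) -> int:
    d = hmac.new(device_secret, b"key" + app_param + nonce, hashlib.sha256).digest()
    return int.from_bytes(d, "big") % (P256_ORDER - 1) + 1

def register(device_secret: bytes, app_param: bytes):
    """Registration: create a site-specific private key and its key handle."""
    nonce = os.urandom(32)
    priv = _derive_scalar(device_secret, app_param, nonce)
    # The key handle stores only the nonce and a MAC binding it to this app;
    # it reveals nothing about device_secret or priv.
    mac = hmac.new(device_secret, b"mac" + app_param + nonce, hashlib.sha256).digest()
    return priv, nonce + mac

def authenticate(device_secret: bytes, app_param: bytes, key_handle: bytes) -> int:
    """Authentication: re-derive the private key, which only works on the
    device that created the handle, and only for the same application."""
    nonce, mac = key_handle[:32], key_handle[32:]
    expected = hmac.new(device_secret, b"mac" + app_param + nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("key handle was not issued by this device for this app")
    return _derive_scalar(device_secret, app_param, nonce)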

In the end it boils down to trust. We are quite open about how our U2F keys are generated, so you can verify for yourself that the algorithms used are sound. You cannot verify that we are truthful, but if you assume that we are lying, then you must also assume that we could take steps to intentionally compromise the security of our devices. Under that assumption we could do any number of things, such as leak the master key through the ECDSA signatures, or build a backdoor into the device that responds to some secret command. Both of these could be done even with a user-supplied device secret.
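
To spell out why the signature channel is so powerful: an ECDSA signature (r, s) over a message hash z satisfies s = k^-1 * (z + d*r) mod n, where d is the private key and k is the per-signature nonce that determines r. A device that chooses k in a way its manufacturer can predict, or that encodes key bits into it, effectively hands over d = r^-1 * (s*k - z) mod n from a single signature, and the resulting signatures are indistinguishable from honest ones to everyone else.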

We could potentially have a command to re-generate the secret on-chip and still be able to trust attestation. We might do this in the future as a means of "wiping" a device. The reason we have not yet done so is that it potentially introduces new attack vectors (such as someone's device being wiped against their will, destroying their credentials). This does NOT, however, do anything to prevent us from leaking information in the way described above, so you still need the same level of trust.

We could even take more steps to make the whole process of key generation verifiable by the user. I've spent some time thinking about the problem, and have devised a scheme where the user can verify that user-provided entropy has gone into the creation of a random device secret, and that that secret is used for the creation of each key pair and MAC, without giving the user direct access to the device secret or the private keys. However, this approach falls short, as the R value used in the ECDSA signatures cannot be verified not to contain a backdoor without direct access to the private key. Thus, this approach only serves to add a bunch of complexity to the key generation, with no real benefit (and was thus shelved). The bottom line is that you have to place some trust in the device vendor; there will always be room for secret backdoors.

As for OTP vs U2F: U2F guards against several attack vectors that OTP does not. OTP is by its nature simpler and does not require the same kind of client support that U2F does. We aren't saying that one is superior to the other; we offer both and hope to help you choose whichever suits your needs best.


PostPosted: Wed Dec 17, 2014 8:38 pm 

Joined: Tue Dec 16, 2014 12:39 am
Posts: 10
Thanks for the detailed response; I consider the question answered.

If I might impose on your time a little longer, I'm not sure it's true that, when using OTP with a user-defined secret, you could leak that secret the way you can via the ECDSA signatures for U2F. Every bit of the OTP depends on every bit of the user-defined secret (AES being what it is). Additionally, it can be verified that the USB device makes no other communication. It seems I don't have to trust you quite as much in that situation (of course I still have to trust that the device is tamper-resistant and cannot easily be convinced to read out the secret value itself). How wrong am I?
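
(To make my assumption concrete: as I understand Yubico's published OTP description, the OTP is a single AES-128-ECB encryption of a fixed 16-byte block, then modhex-encoded. A rough Python sketch, with the field layout and CRC worth double-checking against the docs:)

Code:
# Sketch of the Yubico OTP construction as I understand it: one AES-128-ECB
# block over a fixed 16-byte field layout, then modhex-encoded. The public
# identity prefix that precedes the encrypted part is omitted here.
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

MODHEX = "cbdefghijklnrtuv"  # maps nibbles 0..15

def crc16(data: bytes) -> int:
    # CRC-16 as used by Yubico (poly 0x8408, init 0xffff).
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            lsb = crc & 1
            crc >>= 1
            if lsb:
                crc ^= 0x8408
    return crc

def make_otp(aes_key: bytes, private_uid: bytes, use_ctr: int,
             timestamp: int, session_ctr: int, rnd: int) -> str:
    # aes_key: 16-byte AES-128 key; private_uid: 6-byte private identifier.
    # Layout: uid(6) | useCtr(2,LE) | timestamp(3,LE) | sessionCtr(1) | rnd(2,LE) | crc(2,LE)
    body = (private_uid
            + struct.pack("<H", use_ctr)
            + timestamp.to_bytes(3, "little")
            + bytes([session_ctr])
            + struct.pack("<H", rnd))
    block = body + struct.pack("<H", ~crc16(body) & 0xFFFF)
    enc = Cipher(algorithms.AES(aes_key), modes.ECB()).encryptor()
    ct = enc.update(block) + enc.finalize()
    # Every ciphertext bit depends on every bit of aes_key and of the block.
    return "".join(MODHEX[b >> 4] + MODHEX[b & 0xF] for b in ct)

If that's right, there's no slack in the ciphertext mapping itself; any covert channel would have to ride in the plaintext fields, which only a holder of the AES key can read.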


PostPosted: Wed Dec 17, 2014 8:53 pm 
Site Admin

Joined: Mon Mar 02, 2009 9:51 pm
Posts: 83
To clarify our stance on custom device secrets: We will not allow user-settable secrets with our standard attestation certificates. We might consider selling "un-programmed" devices where a custom certificate and secret can be loaded, but there would have to be a valid business case for us to do so.

To answer rnewson's question: No, I don't see any immediate way of leaking the AES key via the OTPs, but I haven't given it that much thought. As for back doors in the form of secret commands, yes, you could constantly monitor the USB traffic and be somewhat sure that there's no unwanted communication, though you might also want to monitor for wireless transmissions. I suppose it might be possible to leak some information in the timing of the keystrokes, but this would obviously only be detectable on the local machine. The point you're making, that the kind of passive information leakage a malicious U2F device could perform isn't possible with OTP, is as far as I know correct.


PostPosted: Wed Dec 17, 2014 9:00 pm 

Joined: Tue Dec 16, 2014 12:39 am
Posts: 10
It occurs to me that the device could include key bits in the lo/hi timestamp or the nonce, though in my situation the only devices receiving the OTPs would be my own anyway.

Thanks for indulging my questions.

B.


PostPosted: Wed Dec 17, 2014 9:23 pm 

Joined: Tue Nov 18, 2014 9:14 pm
Posts: 95
Location: San Jose, CA
A few quick questions for dain:

How is the source code to the U2F app audited? Is it only reviewed internally, or is it independently reviewed?

What steps have been taken to ensure that the binary that is loaded onto the secure element at the factory is in fact a binary that was compiled from the audited sources? Is the compilation of an audited revision of the sources (confirmed cryptographically, say, using a git commit hash) witnessed by multiple individuals, with a hash of the binary confirmed at the factory?

Just checking to see how far down the rabbit hole goes...
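
(By "a hash of the binary confirmed at the factory" I mean nothing fancier than this, with the expected digest coming from the witnessed, audited build:)

Code:
# Minimal sketch of the "hash of the binary confirmed at the factory" step.
# The expected digest is a placeholder; it would come from the witnessed build.
import hashlib

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def binary_matches(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == EXPECTED_SHA256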


PostPosted: Thu Dec 18, 2014 8:47 am 
Yubico Team

Joined: Wed Aug 06, 2014 2:40 pm
Posts: 38
rnewson wrote:
It occurs to me the device could include key bits in the lo/hi timestamp or the nonce, [...]

The nonce and timestamp are generated by the client, which has no knowledge of (and thus cannot leak) the key.


PostPosted: Thu Dec 18, 2014 11:38 am 

Joined: Tue Dec 16, 2014 12:39 am
Posts: 10
Hm? The lo/hi timestamp and nonce (for OTP, remember) are clearly generated by the device; there's no client input at that stage of the protocol (you tap and get an OTP code, which contains lo/hi/nonce). It does raise the question of where it gets the entropy for the nonce, actually. From my reading of your code, none of those values are used in validation anyway (the server verifies that the first 6 decoded bytes are the private identifier, that the CRC is valid, and that the counter/use fields are equal to or higher than the last recorded values).
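
(Roughly the checks I mean, operating on the 16-byte plaintext obtained after modhex-decoding the OTP and AES-128-ECB-decrypting it with the shared key; the offsets and the exact counter comparison are my reading of the code, so treat them as approximate:)

Code:
# Sketch of the server-side checks on an already-decrypted Yubico OTP block.
import struct

CRC_OK_RESIDUE = 0xF0B8

def crc16(data: bytes) -> int:
    # Same CRC-16 as the token uses (poly 0x8408, init 0xffff).
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            lsb = crc & 1
            crc >>= 1
            if lsb:
                crc ^= 0x8408
    return crc

def validate_plaintext(pt: bytes, expected_uid: bytes,
                       last_use_ctr: int, last_session_ctr: int) -> bool:
    if len(pt) != 16:
        return False
    if pt[:6] != expected_uid:                    # private identifier must match
        return False
    if crc16(pt) != CRC_OK_RESIDUE:               # CRC over all 16 bytes
        return False
    use_ctr = struct.unpack_from("<H", pt, 6)[0]  # power-up/use counter
    session_ctr = pt[11]                          # per-session use counter
    # Replay protection: counters must move forward; the timestamp (bytes 8..10)
    # and random nonce (bytes 12..13) are not part of the check. The exact
    # equal-vs-greater handling may differ in the real validation server.
    if use_ctr < last_use_ctr:
        return False
    if use_ctr == last_use_ctr and session_ctr <= last_session_ctr:
        return False
    return True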

I mention it only as a theoretical way for a malicious OTP token to leak the key bits, since I earlier asserted that a leak couldn't happen in theory. It seems it could, through those fields. Again, only the devices that decode the OTPs would be able to scoop up those bits anyway.

I think you were referring to the other (optional) nonce used to validate the HTTP request/response when talking to a validation server.


PostPosted: Thu Dec 18, 2014 11:40 am 

Joined: Tue Dec 16, 2014 12:39 am
Posts: 10
And my theoretical leak is a bit moot, as the validation server itself would necessarily know the full key anyway. :oops:

