more schemes
[Imported from Trac: page NewMutableEncodingDesign, version 3]
Adding 2 metadata characters and a clear separator gives us:
* 288: `tahoe:MR-11DriaxD9nipA10ueBvv5uoMoehvxgPerpQiXyvMPeiUUdtf6`
* 384: `tahoe:MR-3a31SqUbf8fpWE1opRCT3coDhRqTU7bDU2AvC3RQJBu6ZNFhVscyxA9slYtPVT79x`

[#217:c44](http://allmydata.org/trac/tahoe/ticket/217#comment:44) says that,
if we don't need to prevent collisions, then we can use a K-bit hash for
K-bit second-pre-image resistance.

# Design Proposals

## Commonalities

nicely in the ["StorageSS08" paper](http://allmydata.org/~zooko/lafs.pdf)

* (1K) writecap = K-bit random string (perhaps derived from user-supplied
  material) (remember, K=kappa, probably 128bits)
* (2K) readcap = 2*K-bit semiprivate key
* verifycap = 2*K-bit public key
* storage-index = truncated verifycap

On each publish, a random salt is generated and stored in the share. The data

Like above, but create two levels of semiprivate keys instead of just one:

* (1K) writecap = K-bit random string
* (2K) readcap = 2*K-bit first semiprivate key
* (2K) traversalcap = 2*K-bit second semiprivate key
* verifycap = 2*K-bit public key
* storage-index = truncated verifycap

The dirnode encoding would use H(writecap) to protect the child writecaps,

is the pubkey, and that can't be used to protect the data because it's public
current (discrete-log DSA) mutable file structure, and merely move the
private key out of the share and into the writecap:

* (1K) writecap = K-bit random string = privkey
* (3K) readcap = H(writecap)[:K] + H(pubkey)
* verifycap = H(pubkey)
* storage-index = truncated verifycap
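As a rough sketch of this derivation (hypothetical choices throughout: SHA-256 standing in for the unspecified H, K = 128 bits, and placeholder bytes standing in for the real DSA pubkey):

```python
import hashlib
import os

K_BYTES = 16  # K = kappa = 128 bits

def H(data: bytes) -> bytes:
    # SHA-256 stands in for the design's unspecified hash H
    return hashlib.sha256(data).digest()

writecap = os.urandom(K_BYTES)  # (1K) K-bit random string = privkey
pubkey = b"placeholder for the DSA pubkey derived from the privkey"

readcap = H(writecap)[:K_BYTES] + H(pubkey)  # (3K) = K + 2K bits
verifycap = H(pubkey)                        # 2K bits
storage_index = verifycap[:K_BYTES]          # truncated verifycap
```

Note how every cap embeds the same H(pubkey) identifier, so any cap holder can check that a fetched pubkey matches.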

In this case, the readcap/verifycap holder is obligated to fetch the pubkey

resistance. The verifycap is 2*K.

Or, if the pubkey is short enough, include it in the cap rather than
requiring the client to fetch a copy:

* (1K) writecap = K-bit random string = privkey
* (3K) readcap = H(writecap)[:K] + pubkey
* verifycap = pubkey
* storage-index = H(pubkey)

I think ECDSA pubkeys are 2*K long, so this would not change the length of
the readcaps. It would just simplify/speed-up the download process. If we
could use shorter pubkeys, this design might give us slightly shorter keys.
Alternately, if we could use shorter hashes, then the H(pubkey) design might
give us slightly shorter keys.

### add traversalcap

Since a secure pubkey identifier (either H(pubkey) or the original privkey)
is present in all caps, it's easy to insert arbitrary intermediate levels. It
doesn't even change the way the existing caps are used:

* (1K) writecap = K-bit random string = privkey
* (3K) readcap = H(writecap)[:K] + H(pubkey)
* (3K) traversalcap = H(readcap)[:K] + H(pubkey)
* verifycap = H(pubkey)
* storage-index = truncated verifycap
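The attenuation chain can be sketched concretely (hypothetical primitives: SHA-256 for H, K = 128 bits, placeholder pubkey bytes). Each cap is the truncated hash of the cap one level up plus the shared H(pubkey), so new intermediate levels slot in without disturbing the others:

```python
import hashlib
import os

K = 16  # bytes; K = kappa = 128 bits

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for the unspecified H

pubkey_hash = H(b"placeholder pubkey")  # H(pubkey), 2K bits

writecap = os.urandom(K)                     # (1K) = privkey
readcap = H(writecap)[:K] + pubkey_hash      # (3K)
traversalcap = H(readcap)[:K] + pubkey_hash  # (3K)
verifycap = pubkey_hash                      # H(pubkey)
storage_index = verifycap[:K]                # truncated verifycap
```

Attenuation is fully offline: a readcap holder computes the traversalcap without contacting any server, and every cap carries the same pubkey identifier.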

## Shorter readcaps

To make the readcap shorter, we must give up something, like complete
server-side validation and complete offline attenuation.

* (1K) writecap = K-bit random string = privkey
* (1K) readcap = H(writecap)[:K]
* storage-index = H(readcap)
* verifycap = storage-index + pubkey

The readcap is used as an HMAC key, and the share contains (inside the signed
block) an HMAC of the pubkey. The readcap is also hashed with the per-publish
salt to form the AES key with which the actual data is encrypted.
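A minimal sketch of this scheme, under assumed primitives the design leaves open (SHA-256 for H, HMAC-SHA256 for the HMAC, a plain hash of readcap and salt as the AES key derivation, K = 128 bits, placeholder pubkey bytes):

```python
import hashlib
import hmac
import os

K = 16  # bytes; K = kappa = 128 bits

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for the unspecified H

# Cap derivation chain: writecap -> readcap -> storage-index
writecap = os.urandom(K)   # (1K) = privkey
readcap = H(writecap)[:K]  # (1K)
storage_index = H(readcap)
pubkey = b"placeholder pubkey bytes"

# Stored in the share (inside the signed block) so Rose can validate the pubkey:
pubkey_mac = hmac.new(readcap, pubkey, hashlib.sha256).digest()

# Per-publish salt, mixed with the readcap to form the AES data key:
salt = os.urandom(K)
aes_key = H(readcap + salt)[:K]

# Rose's check after fetching (pubkey, HMAC) from the share:
pubkey_ok = hmac.compare_digest(
    pubkey_mac, hmac.new(readcap, pubkey, hashlib.sha256).digest())
```
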

The writecap begets the readcap, and the readcap begets the storage-index, so
both writers and readers will be able to find the shares, and writecaps can
be attenuated into readcaps offline. Wally the writecap-holder can generate
the pubkey himself and not use (or validate) the value stored in the share.
But Rose the readcap-holder must first retrieve the (pubkey, HMAC) pair and
validate them; then she can use the pubkey to validate the rest of the share.

Wally can generate the verifycap offline, but Rose cannot, since she has to
fetch the pubkey first.

The verifycap must contain a copy of the pubkey (or its hash), because the
storage-index is not usable to validate the pubkey (the HMAC doesn't help,
because it is keyed on the readcap, which is unavailable to Val the
verifycap-holder). And it must contain a copy of the storage-index, because
the pubkey is insufficient to generate it.

The storage-index must be derived from the readcap, not the pubkey, because
the pubkey is too long to fit into the readcap, and Rose the readcap-holder
must have some way of getting the storage-index.

The server can check the signature against the embedded pubkey, but has no
way to confirm that the embedded pubkey is correct, because the validatable
binding between pubkey and storage-index is only available to Rose. You could
copy the verifycap into the share, but there's no cryptographic binding
between it and the storage-index. You could put a copy of the storage-index
in the signed block, but again that doesn't prove that the storage-index is
the right one. Only a scheme in which the storage-index is securely derived
from the pubkey will give the desired property.

Another possibility is to have a 2K-long readcap and put K bits of a pubkey
hash in it. That would look like:

* (1K) writecap = K-bit random string = privkey
* (1K) storage-index = H(pubkey)[:K]
* (2K) readcap = H(writecap)[:K] + storage-index
* verifycap = storage-index

This "half-verifycap" approach restores full offline attenuation, and gives
the server 1K bits of validation, but cuts Val the verifycap-holder's
validation bits in half (from 2K to 1K). A full verifycap, H(pubkey), could
be generated offline by Wally, or by Rose after fetching the pubkey. You
still need the HMAC on the pubkey to give Rose 2K confidence that she's got
the right pubkey: the storage-index only gives 1K.
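The half-verifycap derivation can be sketched the same hypothetical way (SHA-256 standing in for the unspecified H, K = 128 bits, placeholder pubkey bytes):

```python
import hashlib
import os

K = 16  # bytes; K = kappa = 128 bits

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for the unspecified H

writecap = os.urandom(K)  # (1K) = privkey
pubkey = b"placeholder pubkey bytes"

storage_index = H(pubkey)[:K]              # (1K) half of the pubkey hash
readcap = H(writecap)[:K] + storage_index  # (2K) offline-attenuable from writecap
verifycap = storage_index                  # the server's 1K bits of validation

# Wally (or Rose, once she has fetched the pubkey) can still compute the
# full 2K-bit verifycap offline:
full_verifycap = H(pubkey)
```
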