Ed25519-based mutable files -- fast file creation, possibly smaller URLs #217

Open
opened 2007-11-30 04:03:54 +00:00 by zooko · 67 comments

Mole2 and The_8472 and I had a conversation on IRC which leads to the following ideas. These are late-night-sleepy and fresh ideas, so they may be holey.

To create a new mutable file choose a random seed r, and use it to produce a public/private key pair (for concreteness, think [DSA](http://en.wikipedia.org/wiki/Digital_Signature_Algorithm), so your private key is just the random 256-bit number r and the public key is just g^r mod a big prime, let's say maybe 4096 bits).

Now let the symmetric encryption key k be the secure hash of the public key!

Now encrypt the file and upload it. Now encrypt the public key and upload it. Now if you give someone k they can read and verify. If you give them r they can read and write (sign).

Let the "location index" be derived from the read-capability.

To squeeze r you can pick a smaller random number for r, maybe 128 bits, and use a secure hash to expand it to 256 bits. This is cryptographically questionable, but it may be worth asking those questions in order to get really nice "printable capability" lengths, as well as a pleasing simplicity of crypto structure.
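A minimal sketch of that derivation chain (the P and G below are toy, insecure stand-ins just to make the snippet runnable; a real deployment would use a ~4096-bit prime and proper DSA domain parameters):

```python
import hashlib
import secrets

# Toy discrete-log parameters -- illustrative only, nowhere near secure.
P = 2**127 - 1
G = 3

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

r = secrets.token_bytes(16)       # 128-bit random seed; doubles as the write-cap
x = int.from_bytes(H(r), "big")   # expand the seed into a 256-bit private key
u = pow(G, x, P)                  # public (verification) key: g^x mod p
k = H(u.to_bytes(16, "big"))      # symmetric read key = hash of the public key
```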

zooko added the
c/code
p/major
t/enhancement
v/0.7.0
labels 2007-11-30 04:03:54 +00:00
zooko added this to the undecided milestone 2007-11-30 04:03:54 +00:00
Author

Hm. I actually don't know which is better: DSA where my private key is the SHA-256 hash of my 128-bit seed, or DSA where my private key is merely my 128-bit seed.

For best long-term security I would really rather that my private key is 256-bits, but those bits would be generated from a deterministic random number generator anyway, so they would basically be the SHA-256 of something, and I'm skeptical that the practical entropy of the something can ever really exceed 128 bits. So the former is definitely okay, but the latter is a tad simpler, and makes it clear to the cryptologist that the DSA private key has at most 128-bits of entropy in it...

Author

I take that back -- maybe x really ought to be 256 bits when it is used as the exponent. So no problem -- just use SHA-256 to make it into 256 bits.

Author

Okay, I couldn't sleep. I realized, while trying to fall asleep, that my idea had indeed had a hole in it -- I'd neglected verify-cap. The design I posted in the initial description of this ticket didn't allow us to give someone the ability to verify ciphertext without also giving them the ability to decrypt.

I've run up against this problem before when trying to figure out how to go down to one crypto-value in caps instead of two crypto-values, and I was chagrined to have forgotten about it when I posted this idea.

But then, just as I started falling asleep, I had a brainstorm about how you can have separate read-write, read-only, and verify caps, while having only a single crypto-value in the read-write cap and a single crypto-value in the read-only cap. So I got up to write it down lest I forget.

Starting with the design described in the initial description of this ticket, consider the problem that knowledge of the public key (the verification key, which is necessary for verification) implies knowledge of the symmetric secret key. This means that we can't give verification authority without also giving read authority. To fix this, we want the symmetric key to be something which isn't derivable from the verification key. This suggests that we have to include this symmetric key, or a crypto-value which can be used to generate the symmetric key, in the read-cap. But the read-cap also requires a proof that the verification key is correct. In our current implementation (0.7.0) (see [mutable.txt](source:docs/mutable.txt)), the read-cap contains both the symmetric key and the hash of the verification key.

Okay, so this was as far as I had gotten before -- I couldn't think of a way for there to be a single crypto-value which served both to prove the correctness of the verification key and to provide the symmetric data-decryption key, without being derivable from the verification key alone.

But just now I thought of this:

Create a new mutable file as described above, except do not set the symmetric encryption key k equal to the secure hash of the verification key. So now you have your initial seed r, which is also the read-write-cap, and you have the public key, let's call it u. (For concreteness, imagine that we're using DSA and u = g^H(r) mod p, where g and p are system-wide values and H() is SHA-256.)

To generate the verify-cap v, let v = H(u).

To generate the read-only cap, generate a 128-bit salt s with a secure hash of r. Let the read-cap be H(s, v). Now encrypt s using the read-cap and store the ciphertext of s on the storage servers.

Now given only the read-cap, you download the verification key v and the salt s, decrypt them as needed and check that your read-cap = H(s, v). This proves that v is correct, and it makes the read-cap unguessable, even to someone who knows v and the encryption of s.

So now the read-only-cap can be a single crypto-value, and the read-write-cap can be a single crypto-value, and we can separately permit read-write, read-only, and verify.

Note that the read-only-cap can be generated from the read-write-cap purely locally -- without having to fetch any information from the storage servers -- which means that the storage index can be derived from the read-only-cap.
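A minimal sketch of the read-cap/verify-cap derivation just described (the b"salt" tag and the hash choices are assumptions, not the ticket's exact formats):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def derive_caps(r: bytes, u: bytes):
    """r: the write-cap seed; u: the serialized public key."""
    v = H(u)                  # verify-cap: hash of the verification key
    s = H(b"salt", r)[:16]    # 128-bit salt derived from r
    read_cap = H(s, v)        # read-cap binds the salt to the pubkey hash
    return v, s, read_cap

def check_verification_key(read_cap: bytes, v: bytes, s: bytes) -> bool:
    # Given only the read-cap, fetch v and the (decrypted) salt s, then
    # confirm that v really is the right verification key for this cap.
    return read_cap == H(s, v)
```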

The verify-cap will have to include the storage index as well as v. Therefore, these caps when embedded into tahoe hyperlinks can look like this:

http://localhost:8123/MW_upyf5nwrpccqw4f53hiidug96h

http://localhost:8123/MR_o859qp5btcftknqdppt66y1rxy

http://localhost:8123/MV_7j97kjz7zm3mdwsbr8n35oafpr4gjsn9665marzgunpf43nnzz8y

(That's with 128 bits for the storage index in the verify-cap -- I don't imagine verify-caps are important to pass around among users the way read-only caps and read-write caps are.)

This is very satisfying! We get the "http://localhost:8123" trick, the "MW_"/"MR_" tags, the human-friendly base-32 encoding, and we still have URLs that are small enough to be not quite so "intimidating" to users who are considering sharing them with text tools.

Note that since the crypto-values are 128 bits long, and 26 chars of base-32 encoding hold 130 bits, we have two extra bits to play with. It wouldn't hurt to redundantly encode the type tags, in case users lose or mangle the "MW_"/"MR_" tags. (For example, when I double-click on the cap in XEmacs it selects only the base-32 portion -- it treats the underscore as a word separator.)
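For concreteness, here is one way to pack a 128-bit value plus a redundant 2-bit type tag into 26 base-32 characters (the alphabet is z-base-32-style; the exact packing is an illustration, not a spec):

```python
ALPHABET = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32-style, illustrative

def encode_cap_body(value: bytes, tag: int) -> str:
    """26 base-32 chars hold 130 bits: a 128-bit crypto-value plus 2 tag bits."""
    assert len(value) == 16 and 0 <= tag < 4
    n = (int.from_bytes(value, "big") << 2) | tag
    chars = []
    for _ in range(26):
        chars.append(ALPHABET[n & 31])
        n >>= 5
    return "".join(reversed(chars))
```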

zooko changed title from even cleverer public key crypto for mutable files to better crypto for mutable files -- small URLs, fast file creation 2007-11-30 06:51:57 +00:00
Author

Hm. Waking up this morning I remember one other issue. I couldn't find it in the trac tickets, but Brian has mentioned that he wants to make the storage index be derived from the public key so that storage servers can protect against a DoS attack in which someone sees a storage index that you are using, and then goes and uploads a different public key into that storage index on storage servers.

The scheme described in this ticket has the storage index derivable from the read-write-cap (the private key a.k.a. the signing key), and also derivable from the read-only-cap (which is derived from a combination of the public key and s -- the secret read-key salt). So the storage index cannot be derived from purely public information.


We have a protocol designed for this.. I'll attach it here. I've started the conversion work, it's about 50% complete. It requires new code in pycryptopp to expose the DSA library, of course.

warner changed title from better crypto for mutable files -- small URLs, fast file creation to DSA-based mutable files -- small URLs, fast file creation 2007-12-31 21:04:07 +00:00

Attachment mutable.txt (36692 bytes) added

new DSA-based mutable-file protocol


Attachment mutable-DSA.png (87717 bytes) added

quick sketch of the new crypto scheme

Author

See [discussion about key lengths](http://allmydata.org/pipermail/tahoe-dev/2008-January/000314.html) on the tahoe-dev list.

When we do this, visit #306 and take advantage of the compatibility break to clean up hash tag names.


also visit #308 (maybe add "directory traversal caps" for deep-verify), which would probably require the addition of another layer of read-cap into the mutable file structure.


so, I think I see where the #308 "traversal cap" would fit. We define another
cap in between the read-cap and the storage-index. The first 192 bits of the
traversal cap is the hash of the first 192 bits of the read-cap (which is
itself derived from the hash of the salt and the pubkey hash). The last 64
bits of the traversal cap is the hash of the last 64 bits of the read-cap
(which is derived from the hash of the pubkey hash). The storage index is
then composed of the 64-bit hash of the first 192 bits of the traversal cap,
and the 64-bit hash of the last 64 bits of the traversal cap. The verify cap
remains the same: the first 64 bits of the storage index, plus the 256-bit
pubkey hash.

Then we define three separate AES encryption keys. The most powerful one is
the hash of the write-cap, and is used in dirnodes to encrypt the read-write
child caps. The middle one is the hash of the read-cap, and is used to
encrypt the read-only child cap and the filename. The weakest one is the hash
of the traversal cap, and is used to encrypt the traversal-cap (for
directories) or the verify-cap (for files).

We'd need to enhance the MutableFileNode interface to expose these
keys:

  • get_writekey()
  • get_readkey()
  • get_traversalkey() (or maybe get_deepverifykey())
  • get_readcap()
  • get_deepverifycap()
  • get_verifycap()

We could actually define an arbitrary number of intermediate capabilities,
although of course there's no point in doing so unless we define how they
should be used ahead of time.

The main downsides to this scheme are complexity and a slight performance hit
(we need an extra two hashes to get from the write-cap or read-cap to the
storage index).
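A sketch of that bit-slicing, assuming each piece is a truncated SHA-256 (the tagging conventions Tahoe's real hashes use are omitted):

```python
import hashlib

def trunc_hash(data: bytes, nbits: int) -> bytes:
    return hashlib.sha256(data).digest()[: nbits // 8]

def derive_traversal_cap(read_cap: bytes) -> bytes:
    assert len(read_cap) == 32               # 192-bit head + 64-bit tail
    head, tail = read_cap[:24], read_cap[24:]
    return trunc_hash(head, 192) + trunc_hash(tail, 64)

def derive_storage_index(traversal_cap: bytes) -> bytes:
    head, tail = traversal_cap[:24], traversal_cap[24:]
    return trunc_hash(head, 64) + trunc_hash(tail, 64)   # 128-bit SI
```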


Attachment mutable-DSA.2.png (104412 bytes) added

new scheme, with deep-traversal cap


I added a picture of the enhanced scheme, with deep-verify/traversal caps.

One lingering question.. I don't remember why I wanted the last 64 bits of
the read-cap/deep-verify-cap/storage-index to be a hash of the last 64 bits
of the stronger cap, rather than simply a direct copy. If it were a copy,
then the tail of the read-cap would just be the last 64 bits of the pubkey
hash, and the exact same 64 bits would be put in the other caps. This would
save some CPU time when deriving the subordinate caps.

I don't think there were any security properties that resulted from this
being a hash instead of a copy.. the public key is public knowledge, so the
pubkey hash is as well. We aren't mixing in any other material for those
hashes (otherwise the storage server could not check that the slot has the
right pubkey, by hashing the pubkey it finds there and comparing it against
the tail of the SI).
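A sketch of that server-side check, under the assumption that each 64-bit tail is a truncated SHA-256 of the tail above it:

```python
import hashlib

def h64(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:8]   # 64-bit truncated hash

def server_accepts_pubkey(slot_pubkey: bytes, storage_index: bytes) -> bool:
    # Hash the pubkey found in the slot down the read-cap -> traversal-cap
    # -> storage-index chain and compare against the tail of the SI.
    readcap_tail = h64(hashlib.sha256(slot_pubkey).digest())
    traversal_tail = h64(readcap_tail)
    si_tail = h64(traversal_tail)
    return si_tail == storage_index[-8:]
```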


At hackfest4 last night, Zooko, Ping, and I came up with a new scheme that
takes advantage of the smaller keys (mainly the small public key) available
with elliptic-curve DSA. In discrete-log DSA, the private key is 256 bits,
but the public key is the full length of the modulus, probably 2048 bits. In
EC-DSA (as I understand it), the two keys are the same length, and 192 bits
would be plenty.

So in the new scheme:

  • the write-cap is just a 192-bit DSA signing key, randomly generated
    • zooko says that with sufficient coaxing of the crypto++ PRNG seed, we
      could generate this safely and repeatably from a smaller seed, say 96
      bits, and then get smaller write-caps.
  • the write-cap is hashed to form a 96-bit encryption secret. This provides
    the confidentiality
  • the read-cap is the 96-bit encryption secret and the 192-bit DSA verifying
    key. This is 288 bits long.
  • whatever subordinate caps we want are formed by hashing the encryption
    secret and concatenating the result to the verifying key
  • the storage index is the 128-bit hash of the verifying key

The data encryption keys are:

  • write-key: hash of the write-cap
  • read-key: hash of the read-cap
  • deep-verify key: hash of a subordinate cap
  • shallow-verify key: hash of a subordinate cap

There are a small number of caps that are meant to be shared by humans over
text links (IM, email, etc). These are the ones that we want to keep small.
Since we only really need maybe 3 or 4 of these, we assign each of these a
single-letter prefix, like:

  • "D": read-write directory
  • "d": read-only directory
  • "F": immutable file (still longer than we want)

The human-shared-text caps then look like:

  • write-cap: <http://127.0.0.1:8123/D18GeGYBSLAodrPuS4mUzWgrQkt2tlkxsZ>
  • read-cap: <http://127.0.0.1:8123/d2mR5OSt7yuf1ymLgvPxQdJFBGPUEU7uRzBL9CTM5lYwh0KCwG>

We still need to store a copy of the verifying key in the share, so that the
storage server can use it (optionally) to verify the signatures. The server
can hash the verifying key and confirm that it matches the storage index.

I'm starting to think that sharing files and directories should be done at a
slightly higher level than a raw read-cap. More details in #152.
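Putting the derivations above together, a minimal sketch (the EC key generation itself is elided; verifying_key is assumed to be the 192-bit public key matching write_cap):

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def derive_caps(write_cap: bytes, verifying_key: bytes):
    enc_secret = H(write_cap)[:12]            # 96-bit encryption secret
    read_cap = enc_secret + verifying_key     # 96 + 192 = 288 bits
    subordinate = H(enc_secret)[:12] + verifying_key   # e.g. a deep-verify cap
    storage_index = H(verifying_key)[:16]     # 128-bit hash of verifying key
    return read_cap, subordinate, storage_index

write_cap = secrets.token_bytes(24)           # 192 bits, randomly generated
```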


Oops, that got formatted a bit weird:

  • write-cap (192b): <http://127.0.0.1:8123/D18GeGYBSLAodrPuS4mUzWgrQkt2tlkxsZ>
  • read-cap (288b): <http://127.0.0.1:8123/d2mR5OSt7yuf1ymLgvPxQdJFBGPUEU7uRzBL9CTM5lYwh0KCwG>

The read-cap is 72 characters long. The write-cap is 56 characters long (and could potentially be way shorter if we use the PRNG trick, like 40 characters (with prefix) or 16 characters (without)).

Also note that this still depends upon having shared parameters. We're still trying to work out what these are in EC-DSA: zooko now tells me that they are independent of the curve (or at least, you pick a curve, then you pick the parameters, then you pick the key). We'll either need to choose one set of parameters for all Tahoe installations, or somehow build them into the hypothetical "grid identifier", or have the introducer tell everybody about them, or something.
Author

I thought Ping had suggested that a capital letter meant "write authority", so it would be:

  • D: read-write directory
  • d: read-only directory
  • f: immutable file

Also potentially:

  • F: read-write mutable file

?

Leaving open how to spell "read-only mutable file"...


Zooko told me today that all the parameters are built-in to the curve that we select. NIST has been kind enough to generate and publish a couple of relevant ones: 128, 160, 192, 224 bits.

So we don't have to try and figure out how to distribute any "shared parameters": these are all specified by the curve. The private key is indeed simply a random number between 1 and 2^n-1. There are still a few things we need in pycryptopp before we can complete this work:

  • serialize private key to just the private exponent (and not the parameters)
  • deserialize private key
  • serialize public key to just the public point

And an item which will let us shrink the write-caps even further:

  • deterministic generation of private key from small seed

Zooko has put some tickets on the pycryptopp trac instance at http://allmydata.org/trac/pycryptopp/report/1
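For illustration, here is what those three operations look like with the modern Python cryptography package (an analogy for this sketch; the actual work described here targets pycryptopp's Crypto++ wrapper):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

curve = ec.SECP192R1()   # NIST 192-bit curve; all parameters come with the curve
key = ec.generate_private_key(curve)

# 1. serialize private key to just the private exponent (no parameters)
priv_bytes = key.private_numbers().private_value.to_bytes(24, "big")

# 2. deserialize private key from the bare exponent
key2 = ec.derive_private_key(int.from_bytes(priv_bytes, "big"), curve)

# 3. serialize public key to just the (compressed) public point
point = key.public_key().public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)
```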

Author

  • pycryptopp #3 (serialize ecdsa keys without the fluff): http://allmydata.org/trac/pycryptopp/ticket/3
  • pycryptopp #2 (deterministic generation of private key from small seed): http://allmydata.org/trac/pycryptopp/ticket/2

Note that deterministic generation of private key from small seed also enables an unspeakable "Password Based Public Key Encryption" idea. We shall not speak of it.

Jed Donnelley (from the cap-talk list) suggested that it would be useful to have shallow read-only caps on dirnodes, such that the holder could modify any children, but could not modify the dirnode itself. To accomplish this, we'd want another layer of key, in between the write-cap and the read-cap. I'm not sure if this will fit into our new DSA design as well as it would have in the RSA design, but I suspect there is room for it, especially if zooko's "shmublic" key idea works out.


Jed says:

> Why would I want a shallow read-only directory capability?  One example
> is to manage a project with other colleagues who I trust with write
> access to some of the underlying objects.  I can manage the project by
> choosing what to put into the shallow read-only directory (including
> whether some of the pieces are writable, shallow read-only, or deep
> read-only capabilities to directories) - nobody who I give it to can
> modify it - but everybody who I give the shallow read-only capability
> to can extract what's in it and write to that which I choose to share
> write access.
Author

My concern about this is not the implementation costs but the cost of increasing the cognitive load for users.

In Tahoe currently, the number of access control concepts that you have to learn in order to be confident that you can understand the possibilities is somewhat small: caps (identifiers which are also access control tokens), immutable files, mutable files, directories, deep-read-only access to directories.

If we added shallow-read-only-caps-to-directories then this would reduce the number of people who become sufficiently familiar with Tahoe that they feel confident predicting what can and cannot happen with various uses of it. This is a high cost to pay, so I would support it only if the payoff were similarly high. I don't yet understand why Jed's use case can't be solved pretty well with the current access control tools.

This sounds like the kind of use case that Zandr's dad has. I sure would like to see some documentation of their needs...


Yeah. I suppose we could implement the cap internally, but not make it particularly obvious in the UI, and then make it more accessible later when we figure out how to explain it safely.

Author

Hm... This is interesting. I'm not sure that this approach would work to make more people confident about understanding the possible consequences of their actions. Indeed it might undermine their confidence or lead them to be falsely confident!

Consider, for example, if you are writing a program using the current Tahoe API, and someone is going to hand your program a capability to a directory. You write your program so that it queries the cap to determine whether it is read-write or read-only, and then your program takes different actions about how to share this directory with others depending on what sort of cap it is.

Now, if we release a new version of Tahoe with shallow-read-only caps to directories, then what should a shallow-read-only cap answer when queried about whether it is a read-write cap? Obviously it is not a read-write cap, but the program might be inferring that by answering "false" that it is claiming to be a deep-read-only cap.

It seems like any way we do it could cause a program that worked and was secure with Tahoe v1 to work and have a security hole with that new version of Tahoe. So, perhaps such directories would have to be a new type, so that programs written to the Tahoe v1 API cannot use the new directories at all. Then a program that worked and was secure with Tahoe v1 would fail safely if someone passes it a new dir.

In general, I doubt that we can deploy additional access control semantics without raising the amount of study necessary to become a confident programmer. A programmer should want to understand the whole access control mechanism, so undocumented or optional features come with a cost.

This is not to say that we shouldn't add shallow-read-only directories. Perhaps they are so useful that they are worth the cost.

warner added
c/code-mutable
and removed
c/code
labels 2008-04-24 23:27:14 +00:00
Author

See also #102 (smaller and prettier directory URIs).


this isn't going to happen for 1.1.0

warner modified the milestone from 1.1.0 to undecided 2008-05-09 00:11:05 +00:00
Author

Note that it is elliptic curve cryptography which allows us to have public keys that are a mere 192-bits in size and are still secure.

Author

I've been blogging with tiddly-wiki-on-top-of-tahoe. [Here's my blog.](http://tahoebs1.allmydata.com:8123/uri/URI:DIR2-RO:hgvn7nhforxhfxbx3nbej53qoi:yhbnnuxl4o2hr4sxuocoi735t6lcosdin72axkrcboulfslwbfwq/wiki.html) Almost every time I give someone the URL to my blog, they say something about how awful the URL is. :-(

I'm getting sick of hearing about it.

<zooko> Please read my blog: 
        http://tahoebs1.allmydata.com:8123/uri/URI:DIR2-RO:hgvn7nhforxhfxbx3nbej53qoi:yhbnnuxl4o2hr4sxuocoi735t6lcosdin72axkrcboulfslwbfwq/wiki.html 
                                                                        [13:12] 
<wiqd> er, no ?                                                         [13:17] 
<zooko> Okay. 
<PenguinOfDoom> zooko: That URL is utterly terrifying                   [13:30] 
<PenguinOfDoom> zooko: is it a thing that you made up with tahoe?       [13:31] 
<zooko> PoD: I know.  -(                                                [13:32] 
<zooko> http://allmydata.org/trac/tahoe/ticket/217 # DSA-based mutable files 
        -- small URLs, fast file creation                               [13:33] 
<arkanes_> that's "small" is it?                                        [13:34] 
Author

Here's today's mockery of my blog's URL, from Wes Felter:

> zooko wrote:
>
> > Hi WMF!
> > I read your blog today. Here is my new one:
> > http://tahoebs1.allmydata.com:8123/uri/URI:DIR2RO:hgvn7nhforxhfxbx3nbej53qoi:yhbnnuxl4o2hr4sxuocoi735t6lcosdin72axkrcboulfslwbfwq/wiki.html
>
> Dude, that URL is crazy; the price of being a cypherpunk I guess.

I really want to implement this ticket. I intend to prioritize #331 (add DSA to pycryptopp - serialize pubkeys with less fluff) right after my StorageSS08 paper and new checker.

Author

Please see http://allmydata.org/~zooko/lafs.pdf -- Figure 2 shows the current mutable file crypto structure. Figure 3 shows what it would look like to use ECDSA and semi-private keys, as described in this ticket. Figure 3 is much simpler than Figure 2.

Author

I mentioned this ticket as one of the most important-to-me improvements that we could make in the Tahoe code: http://allmydata.org/pipermail/tahoe-dev/2008-September/000809.html

Author

added the diagrams from my paper: source:docs/mut.svg and source:docs/proposed/mutsemi.svg

Author

Here's the latest in my collection of mockery and suspicion for having such a long, ugly URL:

<zooko> Here's my blog which mentions it: 
<zooko> 
        http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html 
<TigZ> erm 
<Defiant-> longest url award 
<TigZ> is it just me or is that URL a little odd 
<CVirus> LOL                                                            [13:30] 
<TigZ> smells a bit spammy 
<cjb> zooko: yeah, what's up with that?  :) 
Author

The next step on this ticket is to write up a proof of security of the scheme. George Danezis and Ian Goldberg's recent work on "Sphinx" might be a good model to follow, as they used a nearly identical construction to achieve rather different security properties :-) http://eprint.iacr.org/2008/475.pdf Loosely speaking, Sphinx is about encrypting where semi-private keys is about signing. I think.

Also perhaps vaguely relevant is Dan Brown's recent publication on "The One-Up Problem in (EC)DSA" http://eprint.iacr.org/2008/286.ps .

I would be extremely grateful if a real cryptographer who has experience writing such papers were to volunteer to help.

However, I've resolved to stop being a scaredy-cat about it and just do my best. It really shouldn't be that hard to do.

Author

adding Cc: tahoe-dev@allmydata.org, and then I'm going to re-post my previous comment.

Author

The next step on this ticket is to write up a proof of security of the scheme. George Danezis and Ian Goldberg's recent work on "Sphinx" might be a good model to follow, as they used a nearly identical construction to achieve rather different security properties :-) http://eprint.iacr.org/2008/475.pdf Loosely speaking, Sphinx is about encrypting where semi-private keys is about signing. I think.

Also perhaps vaguely relevant is Dan Brown's recent publication on "The One-Up Problem in (EC)DSA" http://eprint.iacr.org/2008/286.ps .

I would be extremely grateful if a real cryptographer who has experience writing such papers were to volunteer to help.

However, I've resolved to stop being a scaredy-cat about it and just do my best. It really shouldn't be that hard to do.


Oh, here is a message I wrote last fall about a proposed API for the semi-private keys.

http://allmydata.org/pipermail/tahoe-dev/2008-October/000828.html

I'll also open a pycryptopp ticket for semi-private keys: pycryptopp#13 (http://allmydata.org/trac/pycryptopp/ticket/13)
Author

The ECDSA-based mutable file approach was diagrammed and documented in http://allmydata.org/~zooko/lafs.pdf .

Author

Add to the collection of mockery, contempt and suspicion:

<zooko> On a nearly completely unrelated topic, please check out my awesome blog and the great flamewar that I've spawned on the Open Source Initiative's mailing list
<zooko> http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html 
<mwhudson> zooko: that's one mighty url 

File under "mockery".

Author

Add to the collection of mockery, contempt and suspicion:

<zooko> Here's my blog:
	http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html
<glyph> whee ridiculous URLs
<glyph> zooko: is that number swiss? :
<glyph> :)

file under mockery

Author
<zooko> I blogged:
	<http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html>
<edcba> your url is really shitty
Author

I met Wayne Radinsky and one of the first things he said to me was:

> Oh, cool, and you have a blog but that's the whackiest blog URL I've ever seen -- I guess it's temporary.
Author

rootard is one of the creators of the Nexenta distribution:

<zooko> Laptop Versus Axe:
	http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html
<zooko> Yes, just a Python userland application, packaged up in the
	distributions.  Exactly.
<rootard> you really need tinyurl for these things :)
<zooko> Duly noted.
Author

I have realized that embedding an ECDSA public key directly into the capability doesn't allow for caps to be as short and secure as embedding a secure hash of an ECDSA key into the capability. That's because ECDSA keys have a crypto strength in bits which is half of their size in bits, but secure hash functions have a crypto strength in bits against second-pre-image attacks which is equal to their size in bits.

Now, traditionally in Tahoe we don't rely on a K-bit hash function for more than K/2 bits of security, because that way we don't have to think about the situation that collisions are feasible even though second-pre-images aren't. (Collision-resistance can't be better than K/2 because of the "Birthday Paradox".)

Now when we're talking about immutable file caps, we have to keep doing that, because the user-visible requirement on an immutable file cap is that there is exactly one file that can match it. So immutable file caps in the new crypto cap scheme will still need to be sufficiently long, let's say 256 bits long, that nobody can find a collision. They would look like this:

iaUVPT93Sw1xp9u1MX6JTrv5IMse29V34ZaZ2U8JK91E

(256 bits) (See http://allmydata.org/trac/tahoe/ticket/102 for some other details about formatting of caps.)

But for mutable files, the user-visible requirement is that an unauthorized writer can't create files that would match, which corresponds to second-pre-image resistance instead of collision-resistance, so the caps can look like this:

raIUXhc56d4U18EAlpxZph

(125 bits)

I think it would be valuable to have the latter kind of caps that are that much shorter. The smaller the caps are, the more uses people will adopt them for. The short 125-bit caps are within striking distance of "tiny urls". Here's the first tiny url that I found on twitter just now: http://bit.ly/ossLb .

There is a trade-off to this, however -- you can't do off-line diminish from a write-cap to a read-cap (or a verify-cap) in this scheme. On the other hand, the caps are small enough that you can carry around both the write-cap and the read-cap in the same space that would hold just the write-cap in the other scheme, which is just as good as doing off-line diminish from write-cap to read-cap, if you thought to keep the read-cap around.

Another trade-off is that this makes us vulnerable to weaknesses in a secure hash function in addition to weaknesses in the digital signature scheme.

(By the way, in the future, using this scheme would make it easier to use a digital signature scheme which has even more than 2K bits in its public or private keys. Of particular interest to me are schemes for post-quantum cryptography (http://cr.yp.to/talks/2008.10.18/slides.pdf ) such as "multivariate quadratic" signatures e.g. sflashv2. Here are benchmarks: http://bench.cr.yp.to/results-sign.html .)
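The difference in required lengths boils down to which hash property each cap leans on; a two-line sketch (truncating to 16 whole bytes rather than exactly 125 bits, for simplicity):

```python
import hashlib

def immutable_cap_value(data: bytes) -> bytes:
    # Needs collision resistance: keep the full 256-bit hash.
    return hashlib.sha256(data).digest()

def mutable_readcap_value(pubkey: bytes) -> bytes:
    # Needs only second-pre-image resistance: ~128 bits suffice.
    return hashlib.sha256(pubkey).digest()[:16]
```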

swillden commented 2009-05-12 14:21:01 +00:00
Owner

Replying to zooko:

I have realized that embedding an ECDSA public key directly into the capability doesn't allow for caps to be as short and secure as embedding a secure hash of an ECDSA key into the capability. That's because ECDSA keys have a crypto strength in bits which is half of their size in bits

In your semi-private key scheme, they're a little weaker than that, because the keyspace is not flat. This slight weakening is probably irrelevant (and can certainly be addressed by adding a few extra bits of key size), but it's probably worth thinking about. Also, it occurs to me that perhaps there are other unidentified weaknesses in the semi-private key scheme which could be masked by putting hashes of keys in caps, rather than keys (though I confess I haven't read/thought enough to understand how hashes of keys are useful).

http://allmydata.org/pipermail/tahoe-dev/2009-February/001106.html

Author

Shawn: thank you very much for this analysis. I agree that it is an issue. Ian Goldberg also pointed out this issue to me in private communication. I made a mistake in the paper by saying "let y = H(g^x)". Instead, when you're generating a "random" or unguessable ECC point, you should choose an exponent in the interval [0..n-1] uniformly (where n is the order of the group).

A typical way to do that is instead of taking the result mod n, you instead check whether the result is >= n, and if it is then "re-roll" for example by incrementing the input and trying again. :-) There is another way to do it which involves generating an extra 80 bits or so of your result and taking the whole thing mod n, in order to avoid the theoretically unbounded problem of re-rolling over and over, but that seems unnecessary to me, especially considering that the n's that we are talking about often start with lots of leading 1 bits. E.g., here is the order of the NIST 256-bit randomly-generated curve in hex: 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551.
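
For concreteness, here is a minimal sketch of that re-roll technique in Python, assuming SHA-256 as the hash and the P-256 group order quoted above (the function name and the counter-based re-roll input are illustrative choices, not anything from pycryptopp):

```python
import hashlib

# Group order n of the NIST P-256 curve, quoted above.
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def hash_to_exponent(seed: bytes) -> int:
    """Map a seed to an exponent uniform in [0, N-1]: hash, and if the
    result is >= N, "re-roll" by incrementing a counter and hashing again,
    rather than reducing mod N (which would make the low end of the range
    slightly more likely)."""
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        candidate = int.from_bytes(digest, "big")
        if candidate < N:
            return candidate
        counter += 1
```

With this `N` the re-roll fires with probability only about 2^-32^ (those leading 1 bits again), so the loop almost always terminates on the first try.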

Anyway, we should amend the semi-private keys proposal which says "let y = H(g^x)" to define H as being "secure hash and then re-roll until it falls within the interval of [0..n-1]" instead of being "secure hash and then mod n".

That completely solves the weakness that you've identified, right Shawn?

Thanks!

swillden commented 2009-05-12 22:46:18 +00:00
Owner

Replying to zooko:

That completely solves the weakness that you've identified, right Shawn?

I may be missing something, but I don't think it does.

The issue I referred to has to do not with the generation of y, but with the multiplication of x by y (mod q), and the subsequent use of xy as the signing key. The problem is that the distribution of xy mod q values is not uniform.

I should mention that it's been years since I studied ECDSA and I don't at present understand anything about how the signing key xy is used to perform a signing operation. I'm just noting that your method for constructing the signing key results in some signing keys being more likely than others.


hrm, I think we should move the discussion about semi-private keys to the pycryptopp ticket http://allmydata.org/trac/pycryptopp/ticket/13 , and leave this ticket to talk about how we might use semiprivate keys for DSA-based mutable files.

Shawn: I have a response to your comment, which I'll put on pycryptopp ticket 13.

Author

Adding idnar to the roll-call of people who spontaneously complain about the current Tahoe URLs:

```
<secorp> Your blog is viewable by me -
         http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html
<idnar> man, IRC sucks
<idnar> (specifically, because I have to stare at that big ugly URI)
```

It occurred to me the other night that, if we can make pycryptopp#13 semi-private DSA keys work, then we could have a super-simple mutable-file cap scheme as follows:

  • assume K=128 bits (might be comfortable with 96 bits), this is the security parameter
  • create K-bit random seed, this is the writecap (128 bits)
  • derive 2K-bit semi-private DSA key: this is the readcap (256 bits)
  • hash semi-private key to get the symmetric data-protection key (or rather a value that is used to derive it.. SDMF has a per-version salt, MDMF has a per-segment-per-version salt)
  • derive 2K-bit verifying key: this is the verifycap (256 bits)
  • either use the verifying key as a storage-index, or hash it, or truncate it. Store a copy of the verifying key in the share for the benefit of server-side validation.

For #308 deep-traversal dircaps, insert another semi-private key step between the readcap and the verifycap.

This would give us: 128-bit writecaps, 256-bit readcaps, offline attenuation, full server-side verification of every bit of the share, and minimal roundtrips (no need to fetch an encrypted private key before creating new shares).
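
To make the shape of that derivation chain concrete, here is a sketch in Python. This is emphatically not the real crypto: domain-separated SHA-256 stands in for the semi-private-key and verifying-key derivations (which would come from pycryptopp#13 and don't exist in any library yet), and every tag and variable name is hypothetical.

```python
import hashlib
import os

K_BITS = 128  # the security parameter K from the list above

def H(tag: bytes, *parts: bytes) -> bytes:
    """Domain-separated SHA-256, standing in for the real key derivations."""
    h = hashlib.sha256(tag)
    for part in parts:
        h.update(part)
    return h.digest()

writecap = os.urandom(K_BITS // 8)       # K-bit random seed (128 bits)
readcap = H(b"semi-private", writecap)   # stand-in for the semi-private key (256 bits)
data_key = H(b"data-key", readcap)       # symmetric key (a per-version salt gets mixed in)
verifycap = H(b"verifying", readcap)     # stand-in for the verifying key (256 bits)
storage_index = H(b"storage-index", verifycap)[:16]  # or use/truncate the verifying key
```

The point of the sketch is just the data flow: each cap is derivable offline from the one above it, and nothing has to be fetched from a server in order to diminish a writecap to a readcap or a readcap to a verifycap.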

Author

So up until the last bullet point concerning the storage-index, I think what you describe is what I diagrammed in Figure 3 of lafs.pdf. Does that look right?

I agree this scheme has many good properties. I do still have a concern that 256-bit read-caps (e.g. <http://127.0.0.1:8234/c/r_FRPG24yB7Amho6NoWaaJlBrU7lON7AyiChWRcaQZ1pH> or <http://127.0.0.1:8234/c/D_FRPG24yB7Amho6NoWaaJlBrU7lON7AyiChWRcaQZ1pH/path/to/sub/file.txt>) might be long enough to exclude Tahoe from some interesting uses where 125-bit read-caps (e.g. <http://127.0.0.1:8234/c/r_FMK3eUypHbj6uLocF0496> or <http://127.0.0.1:8234/c/D_FMK3eUypHbj6uLocF0496/path/to/sub/file.txt>) would fit.

Have you ever looked at http://bench.cr.yp.to ? In particular, this page -- http://bench.cr.yp.to/results-sign.html -- shows measurements of a bunch of digital signature schemes, including key-generation time, public-key size, and private-key size. Compare "ecdonaldp256" (which is ecdsa with a 256-bit curve) with the one named "hector" (which is a hyperelliptic curve scheme).

Hector has estimated 113-bit security (compared to ecdonaldp256's 128-bit security); it verifies signatures in about half as many CPU cycles, and generates both signatures and key pairs in about one eighth as many CPU cycles. In theory (I think) hyperelliptic curve public keys can be compressed down to the size of the curve, in this case 113 bits, just like elliptic curve pubkeys can be compressed down to the size of the curve, although the implementations of both measured here don't do that sort of compression.
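
As an aside, point compression for ordinary elliptic curves is routine nowadays. For illustration only, using the modern Python `cryptography` package (which obviously postdates this discussion), a P-256 public key compresses to its 256-bit x-coordinate plus a one-byte prefix encoding the sign of y:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256
compressed = key.public_key().public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)
print(len(compressed))  # 33 bytes: one prefix byte plus the 32-byte x-coordinate
```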

Here's the source code of the hyperelliptic curve implementation:

http://cryptojedi.org/crypto/index.shtml

113 bits should be enough for now in my opinion.

The fatal flaw of the hector algorithm is that it isn't mature. The only known implementation is described as a "demo", it doesn't work at all unless your CPU has SSE2 instructions, and it doesn't compile out of the box on my Ubuntu Jaunty amd64. Figuring out how to compress the public keys and finding or creating a portable implementation sounds like way too hard of a job for us. Hopefully in a few years other people who know more about implementing such things than us will have done so and we can rely on their implementations, but for now I think we have to reluctantly pass up the opportunity to be the first ever serious users of hyperelliptic curve cryptography. :-)

Lacking hyperelliptic curve cryptography, we have to make a trade-off between the larger size of having a full elliptic curve point in the cap and the disadvantages of having the key stored on the servers instead of in the caps.

I'm not entirely sure about those disadvantages. We've previously talked about several, but on closer inspection I'm not sure if they are actual disadvantages. You (Brian) nicely summarized some of those at the end of your note:

  • offline attenuation (By the way, let's call this action "diminishing" a capability. "Attenuating" is something that you do to authority, and it is a very general and flexible notion -- you can imagine writing arbitrary code or even having a human in the loop making the decisions which result in the authority being attenuated. "Diminishing" is something that you do to a capability, and it only goes from one specific thing to another specific thing. I called this operation "diminishing" in lafs.pdf in order to follow the terminology of Jonathan Shapiro's thesis about EROS (http://www.eros-os.org/papers/shap-thesis.ps), where he defined the "Diminish-Take" access model as an extension of the standard "Take-Grant" access model. The addition he added was an operation named "diminish", the effect of which was to produce a capability which offered transitive read-only access to whatever the original capability could access. The first kind of "diminishing" that we wanted in Tahoe was for precisely this same purpose, so that's why I used that word. Of course, the next kind of diminishing that we wanted was for something else -- producing a verify cap from a read cap. Oh well. Anyway, since I've already committed to "diminish" in publishing lafs.pdf, and since it might be useful to have the word "attenuation" separately as being what you do to authority in general, let's call this operation "diminish".)

Oh dear it is way past my bedtime. I'll continue this tomorrow.


Yes, it's the same scheme as in your paper. I must have been misremembering that diagram.. somehow I thought there was still an encrypted private key in there somewhere.

It's a shame that "diminishing"/"diminishment"/("diminution"?) isn't as easy to say as "attenuation" :). I'll have to read shap's thesis.. I haven't heard "diminishing" from anybody else, whereas I hear "attenuation" all the time. But I'll try to follow your lead.

I guess I'll wait for the rest of your response before trying to follow up.

Author

> I haven't heard "diminishing" from anybody else, whereas I hear "attenuation" all the time.

That makes sense. I think there's a good reason for that, which is that the folks you are talking to (the modern obj-cap crowd) are working with the more general, programmable, dynamic kind of authority-reduction, since they are working at the programming language level. I would be interested to see if the people who also have a foot in the operating system level use this or that terminology. I'm not married to it, so maybe we could bring it up at the next friam that includes MarkM (terminology clarifier extraordinaire). Currently I think it is useful to have two words -- I refer to "diminishing" a write cap to produce a read cap, and I also refer to "attenuation" of a storage authority in the accounting scheme (where there is more variety of what sorts of limitations can be combined with one another). If that distinction isn't sensible or useful then I don't mind switching to "attenuation" from now on and letting that detail of lafs.pdf terminology become obsolete.

I guess basically I think of diminishing a cap as one particular way of implementing attenuation of authority. It is a specific way that I am particularly interested in, and I hope to attract the interest of cryptographers who will write papers about "efficient off-line diminishing of capabilities using pairing-based cryptography in hyperelliptic curves of 2-rank one", or whatever brain-busting gobbledygook those cryptographers are always coming up with. :-)

Author

A guy I didn't previously know, Dhananjay Nene (dnene on twitter), wrote: "@zooko have you ever documented what the long URL on your klog is for ? Spooks me every time .. and I always wonder.".

I added a note to the NewCapDesign web page specifying short-and-sweet as a separate desideratum from cut-and-pastable.

swillden commented 2009-08-24 16:59:39 +00:00
Owner

The entropy required for high security precludes truly "short and sweet" URLs as long as the key is embedded in the URL.

I think this is a strong argument for variable-security aliases, and perhaps even user-selectable aliases.
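
As a rough illustration of that constraint: encoding a k-bit cap in an alphabet of 62 characters (an assumption for the sketch; Tahoe caps have variously used zbase32 and base62) takes ceil(k / log2(62)) characters, and no clever encoding gets around the entropy.

```python
import math

# Characters needed to encode k bits in base62: ceil(k / log2(62)).
for bits in (96, 125, 128, 256):
    chars = math.ceil(bits / math.log2(62))
    print(f"{bits}-bit cap -> {chars} characters")
# 96 -> 17, 125 -> 21, 128 -> 22, 256 -> 43 (before any URL framing)
```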


Tagging issues relevant to new cap protocol design.

Author

Argh! I just encountered a new example of how the current Tahoe-LAFS caps are too long to be acceptable to most users.

I had commented on the blog of (good security researcher) Nate Lawson -- http://rdist.root.org/2009/12/30/side-channel-attacks-on-cryptographic-software/ -- and included a link to my klog, namely this link:

http://testgrid.allmydata.org:3567/uri/URI:DIR2-RO:j74uhg25nwdpjpacl6rkat2yhm:kav7ijeft5h7r7rxdp5bgtlt3viv32yabqajkrdykozia5544jqa/wiki.html

He edited my comment and replaced that link with this:

http://allmydata.org/pipermail/tahoe-dev/2010-January/003476.html

Thus changing my comment from linking to my blog to linking to my mailing list message, which is not what I had intended.

I assume that Nate Lawson did this because the URL to my blog is too long and ugly.

So this makes me feel renewed motivation to invent new Tahoe-LAFS caps which are substantially shorter than the current ones.


hooray! renewed motivation! so maybe ECDSA will happen soon? :-)

Author

:-) Well, my current priorities are the Tahoe-LAFS v1.6 release (and the RSA 2010 conference). Here is what we need to do next for ECDSA implementation in pycryptopp. Others can help! One of these items is simply to do a code review and apply a patch!

http://allmydata.org/trac/pycryptopp/ticket/30 -- release EC-DSA
http://allmydata.org/trac/pycryptopp/ticket/3 -- serialize ecdsa keys without the fluff
http://allmydata.org/trac/pycryptopp/ticket/2 -- choose+implement EC-DSA KDF: deterministic generation of private key from small seed

If we are going to use HKDF-Poly1305:
http://allmydata.org/trac/pycryptopp/ticket/33 -- implement Poly1305 or VMAC

If we are going to use semi-private keys:
http://allmydata.org/trac/pycryptopp/ticket/13 -- DSA "semi-private"/intermediate keys


I'm confused. Shortening the read caps does not depend on ECDSA. The semi-private key approach depends on ECDSA (to get ~2K bit read caps), but that approach doesn't give the shortest read caps. The mutable read caps in the Elk Point designs would be about K+50 bits, i.e. nearly a 1/3rd shorter, regardless of public key algorithm, and without depending on semi-private keys. (The immutable read caps are also 1/3rd shorter -- 2K instead of 3K.)

(This is the size of the cryptovalues; there is obviously some fixed overhead in the URL encoding.)


Replying to davidsarah:

... Shortening the read caps does not depend on ECDSA.

I should clarify that ECDSA would help with fast mutable file creation.

Author

Nate Lawson pointed out on his blog that my comments kept hitting his spam filter due to the long URL.


Zooko: interesting! A spam filter that keys off the length of URL! I wonder if the assumption is that it takes a human being to come up with short names, and that robots are only capable of coming up with long random ones? That seems to be the thinking behind some other comments you've transcribed, from humans saying they distrust the tahoe URLs because they "smell spammy".

David-Sarah: I was mainly thinking of speed: ECDSA (and other schemes for which key generation does not involve the creation of new prime numbers) will be way faster than our current RSA approach. I'm also assuming that we'll end up using the simplest approaches we've discussed so far, which mostly involve ECDSA (with or without semi-private keys). There are schemes we can use that reduce cap length without improving generation speed, but I'd rather make one incompatible change for two simultaneous benefits than make two separate incompatible changes for one benefit each. (But I'll take this as motivation to review and comment upon the latest Elk Point designs you've done, and I look forward to hearing more about its successor.)

Also, I've been hounding Zooko (usually offline) to finish ECDSA for years, since there are lots of other projects that have been waiting on it. So I'll take any opportunity to encourage him that I can get :).
Author

Yeah, not speaking for Brian but for myself I want to have as few crypto formats to support as possible, so I would like to jump straight from the current crypto structure to the best crypto structure I can. Unfortunately this seems to have paralyzed my forward progress for a couple of years now as I am always learning about new improved crypto primitives and structures (like Elk Point 2, HKDF-Poly1305-XSalsa20, hash-function-combiners, cipher-combiners...) that are EVEN BETTER than the ones I had previously thought of.

Along these lines, I'm currently feeling a bit polarized about Brian's preference for simplicity vs. the features of Elk Point 2. I highly value short URLs and long-lived crypto, and at least to some degree I would be willing to accept complexity in return for those values.

I think polarization is what I get when people express value judgments without a lot of technical detail. When I get into comparing technical details then even if I still disagree with someone else's preference, at least I don't feel as frustrated about it -- I can look at a table of tradeoffs and say "Okay I can live with this and that tradeoff". I think the way forward on that issue is to make comparable documentation for the three current candidates (Elk Point 2, Simple, Semi-Private-Keys), or maybe for David-Sarah to divulge their latest ideas.

By the way, the reason I keep posting on this ticket about people who complain about Tahoe-LAFS URLs, bots that ban Tahoe-LAFS URLs, etc. etc. is to show that the issue with long URLs is not just my personal preference. There seems to be plenty of evidence that long URLs are unacceptable to a significant, perhaps overwhelming, fraction of users. One of the data points that isn't already recorded on this ticket is that as soon as allmydata.com had paid Brian and me to invent Tahoe-LAFS, they then immediately paid someone else to invent a tiny-url-central-database to hide Tahoe-LAFS URLS.

If anyone has any evidence that users are okay using Tahoe-LAFS-sized URLs, please post it to this ticket! As far as I know, I'm the only human in the universe who doesn't mind using Tahoe-LAFS URLs on the Web. (Note: I don't mean putting Tahoe-LAFS caps in your aliases files or whatever, I mean on the Web. Sharing the URLs with other people, posting them on blogs, etc. etc.) Of course, I am not a representative data point for this issue since I am not only a hacker but also a Tahoe-LAFS hacker. If you are a hacker and you don't mind using Tahoe-LAFS URLs, I would like to know it, but I would be even more interested if your mom is okay using Tahoe-LAFS URLs. But I'll take whatever data points I can get, because I think making a major technical decision about something like URL size without considering real world observations of user preferences is a sin (akin to optimizing without measuring). :-)


I am a hacker and I do mind using Tahoe URLs, primarily because they wrap. That usually requires manual fiddling to get a web browser to accept a Tahoe gateway URL that is embedded in email, rather than a single click. If they were less than 75 characters, it'd be fine.

However, this ticket is about ECDSA! I will copy all the comments about URLs from here to a new ticket, #882.


Replying to warner:

David-Sarah: I was mainly thinking of speed: ECDSA (and other schemes for
which key generation does not involve the creation of new prime numbers) will
be way faster than our current RSA approach.

Absolutely -- I have no objection at all to switching to ECDSA for that reason, or to doing so at the same time as other changes in the cap protocol. I just think we should be clear that cap length is not a particularly significant reason for doing so.

I'm also assuming that we'll end up using the simplest approaches we've discussed so far, which mostly involve ECDSA (with or without semi-private keys).

The semi-private key scheme depends on ECDSA. The other scheme in NewMutableEncodingDesign doesn't; it is IMHO simpler, and certainly much less dependent on novel public-key crypto that is difficult to analyse.

daira changed title from DSA-based mutable files -- small URLs, fast file creation to ECDSA-based mutable files -- fast file creation, possibly smaller URLs 2010-01-07 07:44:19 +00:00
zooko modified the milestone from eventually to 2.0.0 2010-02-23 03:08:50 +00:00

During the second Tahoe-LAFS summit (the long conversation in the restaurant :-) we settled on Ed25519 rather than ECDSA for signing introducer messages, and I think we also acknowledged that this would be likely to steer us toward choosing Ed25519 for mutable files. It's not strictly necessary to use the same algorithm, but I argued for using the same one on complexity grounds (and for Ed25519 being a good choice, if we haven't found a hash-based algorithm with a better performance/signature size tradeoff).
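
For reference, Ed25519's numbers support this: key generation is essentially free (a private key is fully determined by 32 seed bytes, which fits nicely with deriving signing keys from a small writecap seed), verifying keys are 32 bytes, and signatures are 64 bytes. A quick illustration using today's Python `cryptography` package (an anachronism relative to this thread, named only for concreteness):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A 32-byte seed fully determines the key pair. (Fixed bytes here for
# demonstration; a real cap seed would be random.)
sk = Ed25519PrivateKey.from_private_bytes(b"\x01" * 32)
vk = sk.public_key()

sig = sk.sign(b"mutable file contents")
vk.verify(sig, b"mutable file contents")  # raises InvalidSignature on a bad sig

raw_vk = vk.public_bytes(serialization.Encoding.Raw,
                         serialization.PublicFormat.Raw)
print(len(raw_vk), len(sig))  # -> 32 64
```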

daira changed title from ECDSA-based mutable files -- fast file creation, possibly smaller URLs to Ed25519-based mutable files -- fast file creation, possibly smaller URLs 2012-02-21 23:46:33 +00:00

Ticket retargeted after milestone closed (editing milestones)

meejah removed this from the 2.0.0 milestone 2021-03-30 18:40:46 +00:00