Allow Tahoe filesystem to be run over a different key-value-store / DHT implementation #869

Open
opened 2009-12-20 23:26:52 +00:00 by daira · 5 comments

source:docs/architecture.rst describes Tahoe as comprising three layers: key-value store, filesystem, and application.

Most of what makes Tahoe different from other systems is in the filesystem layer -- the layer that implements a cryptographic capability filesystem. The key-value store layer implements (a little bit more than) a Distributed Hash Table, which is a fairly well-understood primitive with many implementations. The Tahoe filesystem and applications could in principle run on a different DHT, and it would still behave like Tahoe -- with different (perhaps better, depending on the DHT) scalability, performance, and availability properties, but with confidentiality and integrity ensured by Tahoe without relying on the DHT servers.

However, there are some obstacles to running the Tahoe filesystem layer on another DHT:

  • The code isn't strictly factored into layers (even though most code files belong mainly to one layer), so there isn't a narrow API between the key-value store and filesystem-related abstractions.
  • The communication with servers currently needs to be encrypted (independently of the share encryption), and other DHTs probably wouldn't support that.
  • Because the filesystem has only been used with one key-value store layer up to now, it may make assumptions about that layer that haven't been clearly documented.

Note that even if the Tahoe code were strictly layered, we should still expect some significant effort to port Tahoe to a particular DHT. The DHT servers would probably have to run some Tahoe code in order to verify shares, for example.

daira added the
c/unknown
p/major
t/enhancement
v/1.5.0
labels 2009-12-20 23:26:52 +00:00
daira added this to the undecided milestone 2009-12-20 23:26:52 +00:00

Hmm, good points. This ties in closely to the docs outline that we wrote up
(but which we haven't finished by writing the actual documentation it calls
for): source:docs/specifications/outline.rst .

As you note, there are several abstraction-layer leaks which would need to be
plugged or accommodated to switch to a general-purpose DHT for the bottom-most
layer. Here are a few thoughts.

  • the main special feature that we require of the bottom-most DHT layer is
    support for mutable files. All of the immutable-file stuff is fairly
    standard DHT material. But to implement Tahoe's mutable files, we need a
    distributed slot primitive with capability-based access control: creating
    a slot should return separate read- and write- caps, and there should be
    some means of repairing shares without being able to forge new contents.
  • the only need for encrypted server connections is to support the
    shared-secret used to manage mutable-slot access control (which we'd like
    to get rid of anyways, because it makes share-migration harder, and it
    makes repair-from-readcap harder). If we had a different mechanism, e.g.
    representing slot-modify authority with a separate ECDSA private key per
    server*slot, then we could probably drop this requirement. (there is some
    work to do w.r.t. replay attacks and building a suitable protocol with
    which to prove knowledge of the private key, but these are well-understood
    problems). A rough sketch of such a signed-update check appears after
    this list.
  • on the other hand, the shared-secret slot-modify authority is nice and
    simple, is fast and easy for the server to verify (meaning a slow server
    can still handle lots of traffic), and doesn't require the server to have
    detailed knowledge of the share layout (which decouples server version
    from client version). Most of the schemes we've considered for
    signed-message slot-modify operations require the servers to verify the
    proposed new slot contents thoroughly, making it harder to deploy new
    share types without simultaneously upgrading all the servers.
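
For concreteness, here is a minimal Python sketch (using the cryptography
package) of how a server might check signed slot-modify requests under the
per-slot-key idea in the second bullet above. The class and field names are
invented for illustration, and the replay defence shown is just a
monotonically increasing sequence number, not a worked-out protocol.

```python
# Hypothetical sketch -- not the actual Tahoe protocol or share layout.
# A per-slot ECDSA public key is registered when the slot is created; a
# modify request must be signed by the matching private key (held by the
# writecap holder) and carry a sequence number newer than the last one
# accepted, as a simple replay defence.

import hashlib
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


@dataclass
class ModifyRequest:
    slot_id: bytes      # which slot to modify
    seqnum: int         # must exceed the server's stored seqnum for this slot
    new_share: bytes    # proposed new share contents (still ciphertext)
    signature: bytes    # signature over (slot_id, seqnum, sha256(new_share))


class SlotServer:
    def __init__(self):
        self._pubkeys = {}   # slot_id -> EllipticCurvePublicKey
        self._seqnums = {}   # slot_id -> last accepted sequence number
        self._shares = {}    # slot_id -> current share bytes

    def create_slot(self, slot_id: bytes, pubkey: ec.EllipticCurvePublicKey) -> None:
        self._pubkeys[slot_id] = pubkey
        self._seqnums[slot_id] = 0

    def modify_slot(self, req: ModifyRequest) -> bool:
        if req.slot_id not in self._pubkeys:
            return False
        message = (req.slot_id + req.seqnum.to_bytes(8, "big")
                   + hashlib.sha256(req.new_share).digest())
        try:
            self._pubkeys[req.slot_id].verify(
                req.signature, message, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            return False                 # not signed by this slot's key
        if req.seqnum <= self._seqnums[req.slot_id]:
            return False                 # stale or replayed request
        self._seqnums[req.slot_id] = req.seqnum
        self._shares[req.slot_id] = req.new_share
        return True
```

A client holding the slot's private key would sign each update. The
trade-offs in the third bullet apply to schemes where the server also
validates the share's internal structure, which this sketch deliberately
omits.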

There might also be some better ways of describing Tahoe's nominal layers, in
a sense refactoring the description or shuffling around the dotted lines.
I've been trying to write up a presentation using the following arrangement:

  • We could say that the lowermost layer is responsible for providing
    availability, reliability, and integrity: this layer has all the
    distributed stuff, erasure coding, and hashes to guard against corrupted
    shares, but you could replace it with a simple local lookup table if you
    didn't care about that sort of thing. This layer provides a pair of
    immutable operations (key=put(data) and data=get(key)), and a triple of
    mutable operations (writecap,readcap=create(), put(writecap,data),
    data=get(readcap)). The check/verify/repair operations work entirely at
    this level. All of the 'data' at this layer is ciphertext. (A rough
    sketch of this interface appears after this list.)
  • The next layer up gets you plaintext: the immutable operations are
    key=f(readcap), ciphertext=encrypt(key, plaintext), and
    plaintext=decrypt(key, ciphertext). The mutable operations are the same,
    plus something to give you the writecap-accessible-only column of a
    dirnode. If you didn't care about confidentiality, you could make these
    NOPs.
  • The layer above that gets you directories, and is mostly about serializing
    the childname->childcap+metadata table into a mutable slot (or immutable
    file). If you have some other mechanism to manage your filecaps, you could
    ignore this layer.
  • The layer above that provides some sort of API to non-Tahoe code, making
    all of the other layers accessible from outside. This presents operations
    like data=get(readcap), children=read(dircap), etc.
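
To make the first bullet concrete, here is a hypothetical Python sketch of
the lowermost layer's operations, together with the "simple local lookup
table" stand-in mentioned above. None of these names correspond to actual
Tahoe classes; it only illustrates the shape of the API.

```python
# Hypothetical sketch of the lowermost ("key-value store") layer's API.
# Everything passed through this layer is ciphertext; a real implementation
# would add erasure coding, server selection, and hash-based integrity checks.

import hashlib
import os
from abc import ABC, abstractmethod


class KeyValueStore(ABC):
    # immutable operations
    @abstractmethod
    def put_immutable(self, data: bytes) -> bytes:
        """Store ciphertext; return the key it can be fetched with."""

    @abstractmethod
    def get_immutable(self, key: bytes) -> bytes:
        """Fetch ciphertext previously stored under key."""

    # mutable (slot) operations
    @abstractmethod
    def create_mutable(self) -> tuple:
        """Create a slot; return (writecap, readcap)."""

    @abstractmethod
    def put_mutable(self, writecap: bytes, data: bytes) -> None:
        """Replace the slot's contents, using write authority."""

    @abstractmethod
    def get_mutable(self, readcap: bytes) -> bytes:
        """Fetch the slot's current contents, using read authority."""


class LocalTableStore(KeyValueStore):
    """The degenerate case: a local dict instead of a distributed grid."""

    def __init__(self):
        self._immutable = {}
        self._mutable = {}

    def put_immutable(self, data):
        key = hashlib.sha256(data).digest()   # content-addressed key
        self._immutable[key] = data
        return key

    def get_immutable(self, key):
        return self._immutable[key]

    def create_mutable(self):
        writecap = os.urandom(32)
        readcap = hashlib.sha256(writecap).digest()  # toy derivation only
        self._mutable[readcap] = b""
        return writecap, readcap

    def put_mutable(self, writecap, data):
        self._mutable[hashlib.sha256(writecap).digest()] = data

    def get_mutable(self, readcap):
        return self._mutable[readcap]
```

The layer above this one would supply key=f(readcap) plus the
encrypt/decrypt step, and the directory layer would serialize its
childname->childcap+metadata table into one of these mutable slots.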

One way to look at Tahoe is in terms of that top-most API: you don't care
what it does, you just need to know about filecaps and dircaps. Another view
is about some client code, the API, the gateway node, and the servers that
the gateway connects to: this diagram would show different sorts of messages
traversing the different connections. A third view would abstract the servers
and the DHT/erasure-coding stuff into a lookup table, and focus on the
crypto-and-above layers.

Author

The "grid layer" is now called the "key-value store layer".

The "grid layer" is now called the "key-value store layer".
daira changed title from Allow Tahoe filesystem to be run over a different grid/DHT implementation to Allow Tahoe filesystem to be run over a different key-value-store / DHT implementation 2010-01-20 07:10:09 +00:00
Author

Other DHTs might have better anti-censorship properties.

Author

Replying to warner:

> ... This ties in closely to the docs outline that we wrote up
> (but which we haven't finished by writing the actual documentation it calls
> for): source:docs/specifications/outline.txt .

Now source:docs/specifications/outline.rst.

>   • on the other hand, the shared-secret slot-modify authority is nice and
>     simple, is fast and easy for the server to verify (meaning a slow server
>     can still handle lots of traffic), and doesn't require the server to have
>     detailed knowledge of the share layout (which decouples server version
>     from client version). Most of the schemes we've considered for
>     signed-message slot-modify operations require the servers to verify the
>     proposed new slot contents thoroughly, making it harder to deploy new
>     share types without simultaneously upgrading all the servers.

As far as performance is concerned, signature verification is fast with RSA, ECDSA or hash-based signatures (and the hashing can be done incrementally as the share is received, so no significant increase in latency). I don't think this is likely to be a performance bottleneck.
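
As a small illustration of the incremental hashing mentioned above (the
names are made up; this is not Tahoe code): the server folds each chunk into
the hash as it arrives, so checking the digest, or a signature over it, adds
no extra pass over the share once the upload completes.

```python
import hashlib


def receive_share(chunks, expected_digest: bytes, write_chunk) -> bool:
    """Consume an iterable of share chunks, hashing while writing them out."""
    hasher = hashlib.sha256()
    for chunk in chunks:
        hasher.update(chunk)   # hash incrementally as data arrives
        write_chunk(chunk)     # e.g. append to a temporary share file
    # the final digest is what a signature would be verified against
    return hasher.digest() == expected_digest
```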

The compatibility impact of changes in the mutable share format would be that an older server is not able to accept mutable shares of the newer version from a newer client. The newer client can still store shares of the older version on that server. Grids with a mixture of server and client versions (and old shares) will still work, subject to that limitation.

On the other hand, suppose that the reason for the change is migration to a new signing algorithm to fix a security flaw. In that case, a given client can't expect any improvements in security until all servers have upgraded, then all shares are migrated to the new format (probably as part of rebalancing), then that client has been upgraded to stop accepting the old format. Relative to the current scheme where servers don't need to be upgraded because they are unaware of the signing algorithm, there is indeed a significant disadvantage. At least the grid can continue operating through the upgrade, though.

The initial switch from write-enablers to share verification also requires upgrading all servers on a grid -- but if you're doing this to support a different DHT, then that would have to be effectively a new grid, which would just start with servers of the required version. The same caps could potentially be kept when migrating files from one grid to another, as long as the cap format has not changed incompatibly.


Replying to davidsarah (comment:6):

> As far as performance is concerned, signature verification is fast with
> RSA, ECDSA or hash-based signatures (and the hashing can be done
> incrementally as the share is received, so no significant increase in
> latency). I don't think this is likely to be a performance bottleneck.

I'd want to test this with the lowliest of our potential storage servers:
embedded NAS devices like Pogo-Plugs and OpenWRT boxes with USB drives
attached (like Francois' super-slow ARM buildslave). Moving from Foolscap to
HTTP would help these boxes (which find SSL challenging), and doing less work
per share would help. Ideally, we'd be able to saturate the disk bandwidth
without maxing out the CPU.

Also, one of our selling points is that the storage server is low-impact: we
want to encourage folks on desktops to share their disk space without
worrying about their other applications running slowly. I agree that it might
not be a big bottleneck, but let's just keep in mind that our target is lower
than 100% CPU consumption.

Incremental hashing will require forethought in the CHK share-layout and in
the write protocol (the order in which we send out share bits): there are
plenty of ways to screw it up. Mutable files are harder (you're updating an
existing merkle tree, reading in modified segments, applying deltas,
rehashing, testing, then committing to disk). The simplest approach would
involve writing a whole new proposed share, doing integrity checks, then
replacing the old one.
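
A minimal sketch of that write-whole-share-then-swap approach; verify_share
is a placeholder for whatever integrity checks the server performs, not a
real Tahoe function.

```python
import os
import tempfile


def replace_share(share_path: str, new_share: bytes, verify_share) -> bool:
    """Write the proposed share to a temp file, check it, then swap it in."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(share_path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_share)                # write the whole proposed share
            f.flush()
            os.fsync(f.fileno())
        if not verify_share(tmp_path):        # full integrity check before commit
            os.unlink(tmp_path)
            return False
        os.replace(tmp_path, share_path)      # atomic swap of old for new
        return True
    except BaseException:
        os.unlink(tmp_path)
        raise
```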

> The compatibility impact of changes in the mutable share format would be
> that an older server is not able to accept mutable shares of the newer
> version from a newer client. The newer client can still store shares of the
> older version on that server. Grids with a mixture of server and client
> versions (and old shares) will still work, subject to that limitation.

Hm, I think I'm assuming that a new share format really means a new encoding
protocol, so everything about the share is different, and the filecaps
necessarily change. It wouldn't be possible to produce both "old" and "new"
shares for a single file. In that case, clients faced with older servers
either have to reencode the file (and change the filecap, and find everywhere
the old cap was used and replace it), or reduce diversity (you can only store
shares on new servers).

Migrating existing files to the new format can't be done in a simple
rebalancing pass (in which you'd only see ciphertext); you'd need something
closer to a cp -r.

My big concern is that this would slow adoption of new formats like MDMF.
Since servers should advertise the formats they can understand, I can imagine
a control panel that shows me grid/server-status on a per-format basis: "if
you upload an SDMF file, you can use servers A/B/C/D, but if you upload MDMF,
you can only use servers B/C". Clients would need to watch the control panel
and not update their config to start using e.g. MDMF until enough servers
were capable to provide reasonable diversity: not exactly a flag day, but not
a painless upgrade either.

> On the other hand, suppose that the reason for the change is migration to a
> new signing algorithm to fix a security flaw. In that case, a given client
> can't expect any improvements in security until all servers have upgraded,

Incidentally, the security vulnerability induced by such a flaw would be
limited to availability (and possibly rollback), since that's all the server
can threaten anyways. In this scenario, a non-writecap-holding attacker might
be able to convince the server to modify a share in some invalid way, which
will either result in a (detected) integrity failure or worst-case a
rollback. Anyways, it probably wouldn't be a fire-drill.

warner added
c/code-network
and removed
c/unknown
labels 2014-09-11 22:18:51 +00:00
Reference: tahoe-lafs/trac#869