padding to hide the size of plaintexts #2018

Open
opened 2013-07-07 22:50:46 +00:00 by zooko · 14 comments

Even though LAFS keeps the contents of files secret from attackers, it currently exposes the length (in bytes) of that content. This can be useful information to an attacker in various ways. For one thing, an attacker might be able to "recognize" specific files or kinds of files from a pattern of file sizes. More subtle dangers may also exist, depending on the circumstances; for example, the famous "CRIME" attack on SSL (http://security.stackexchange.com/questions/19911/crime-how-to-beat-the-beast-successor/19914#19914) depends crucially on the attacker being able to measure the exact size of certain encrypted output. Ticket #925 is about how potentially interesting metadata about the LAFS filesystem itself can be inferred from the byte-level granularity of exposed sizes.

I propose that LAFS automatically add a randomized number of padding bytes to files when encrypting. Concretely, how about something like this (sketched in code after the list), with `F` as the file size in bytes:

  1. Let the "max padding", `X`, be `32*ceil(log₂(F))`.

  2. Choose a number of padding bytes, `P`, uniformly from `[0..X)`, as determined by the encryption key. Note: it is important that the number be deterministic from the key, so that multiple encryptions of the same-keyed file will not pick different random numbers and allow an attacker to statistically observe the padding's size. Be sure the pad length is derived from the key via a strongly one-way path.

  3. Append `P` bytes of padding (0 bytes) to the plaintext before encryption. (This does not affect how the key is derived from the plaintext in the case of convergent encryption.)
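
A minimal sketch of steps 1–3 in Python (my illustration, not actual LAFS code; the helper names and the `b"lafs-pad-length:"` derivation tag are hypothetical):

```python
import hashlib
import math

def pad_length(key: bytes, file_size: int) -> int:
    """Derive P deterministically from the encryption key."""
    if file_size < 2:
        return 0
    max_pad = 32 * math.ceil(math.log2(file_size))   # X = 32*ceil(log2(F))
    # A hash is a strongly one-way path from the key to the pad length:
    # observing the padded size tells an attacker nothing about the key.
    digest = hashlib.sha256(b"lafs-pad-length:" + key).digest()
    return int.from_bytes(digest[:8], "big") % max_pad   # P in [0..X)

def pad(plaintext: bytes, key: bytes) -> bytes:
    """Step 3: append P zero bytes to the plaintext before encryption."""
    return plaintext + b"\x00" * pad_length(key, len(plaintext))
```

(The modulo introduces a bias, but a negligible one since 2⁶⁴ vastly exceeds any plausible `X`; a production version might prefer rejection sampling.)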

zooko added the c/code-encoding, p/normal, t/enhancement, v/1.10.0 labels 2013-07-07 22:50:46 +00:00
zooko added this to the undecided milestone 2013-07-07 22:50:46 +00:00
Author

Started a mailing list conversation: https://tahoe-lafs.org/pipermail/tahoe-dev/2013-July/008492.html


Is there an advantage to random padding instead of just padding up to some fixed interval?

If someone uploads many different files which are all the exact same size, random padding will not stop attackers from inferring what that size is.
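
A quick simulation (my own sketch, not from the thread) illustrates the point: since each distinct file gets its own key, many same-sized files draw effectively independent pad lengths, and the minimum observed padded size converges on the true size.

```python
import hashlib
import math
import os

F = 1_000_000  # true plaintext size shared by many distinct files

def padded_size(key: bytes) -> int:
    """Padded size under the ticket's scheme, pad derived from the key."""
    x = 32 * math.ceil(math.log2(F))
    p = int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % x
    return F + p

observed = [padded_size(os.urandom(16)) for _ in range(1000)]
print(min(observed) - F)  # near 0: the smallest observation ~ reveals F
```

Padding to a fixed interval instead puts every such file at exactly the same padded size, so no number of observations narrows it down further.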

nickm commented 2013-07-08 20:01:05 +00:00
Owner

Here's how to do the analysis.

Look at what information the attacker sees over time, and what the attacker is trying to learn. Consider how fast they can learn what they want to know as Tahoe stands today. Then consider how fast they can learn it with the proposed padding scheme.

Generally, padding many things to the same size tends to work better than adding random amounts of padding to a lot of things. In the "pad to same size" case, the attacker learns less from seeing the size of a single object.

Don't forget object linkability in your analysis. That is, if certain messages are likelier to be received together than two messages chosen at random, then the attacker can make inferences over time, so you can't just look at single-object probabilities in isolation.

Feel free to share this message wherever it will do good.

Author

Comments from Marsh Ray:

  • I like the length hiding idea. Be sure the pad length gets derived from the key via a strongly one-way path.
  • Would file access patterns reveal the amount of padding? Would it ever make sense to distribute padding over the whole file?
  • "you might use 32*ceil(log₂(F)) to hide F a little better"

from https://twitter.com/marshray/status/354028204446584832


+1 on the need for a threat model (mentioned on the list by Greg Troxel).

A threat model is really important so that we notice conflicting design goals, or unnecessary complexity.

An example conflict of goals: consider a threat model with an attacker who only operates a storage node and has no resources outside of that storage node, and consider two features: range requests versus "size confidentiality" through padding.

An incremental update to a byte range reveals that that range is interesting, and probably not padding. A lack of byte range updates means updates require full file uploads, which is a large usability cost.

Range updates can also potentially reveal information through layers outside of LAFS! Suppose a user is using an encrypted loop-back filesystem stored in a single "local filesystem file", but that single file happens to be backed by some magic LAFS goo that "smartly" notices only a range has been altered, and only sends updates for that range. Now the user changes a small secret stored inside the loop-back encrypted filesystem, and that translates to a tiny range request a storage node operator could see, whose size is close to the tiny secret size.

So, are bup-style hash splitting or LDMF-style deltas with individual padding superior to range updates? We can't answer this unless we have a threat model and we also prioritize other features against defense-features for that threat model.


A natural starting place for threat modeling attacker capabilities would be the operator of a single storage node. Here's how to get started in your career of blackmailing LAFS users:

  1. run a storage node and use `find`, `ls`, and the like to examine the filesystem metadata on shares. (This could give sizes, creation times, modification times, access times.)
  2. examine local share contents using any tools at your disposal. (What can this tell an attacker about shares? Serial numbers? Signing keys? Merkle tree roots?)
  3. turn up logging, and modify the storage node code to log protocol-level request data of interest. (This could give client IPs, more precise timing information, range requests on shares.)
Author

I'm not sure how to proceed with the threat model you suggest, nejucomo. The avenue of information that you mention (range updates) is closely related to the one Marsh Ray mentioned (comment:394136).

At the moment I feel blocked, because I'm not sure how to proceed with the threat model. My feeling is, if I want to make progress on this I should ignore the whole idea of a threat model and move ahead with implementation! That sounds wrong, but what's the next step on the threat model?

nejucomo was assigned by zooko 2013-07-23 00:33:53 +00:00

I added keyword `research` because I believe there's an opportunity for academic research to contribute greatly here.

Note: I'm quite unfamiliar with the relevant literature, so this may already be a well-understood problem.


Replying to zooko:

I'm not sure how to proceed with the threat model you suggest, nejucomo. The avenue of information that you mention (range updates) is closely related to the one Marsh Ray mentioned (comment:394136).

At the moment I feel blocked, because I'm not sure how to proceed with the threat model. My feeling is, if I want to make progress on this I should ignore the whole idea of a threat model and move ahead with implementation! That sounds wrong, but what's the next step on the threat model?

Let's start here:

  • What specific confidentiality claims can we make to users about this feature?

Perhaps we should review existing threat models for other storage applications which have confidentiality-from-storage-operator requirements.

Another way to approach the threat model is to show a tangible vulnerability for confidentiality of the current LAFS storage (without padding). Then demonstrate that padding thwarts that to some degree. Some interesting challenges:

As a storage operator, can I deduce with >50% accuracy that a particular share:

  • represents a directory?
  • represents a directory with N children?
  • represents a directory with N children and a total of K bytes of child edge names?
  • is part of a well-known specific directory structure (e.g. the Linux source)?
  • is part of a well-known general directory structure (e.g. "this is a git repository" or "this is a Linux home directory")?
  • is a well-known general file format (e.g. "this is an ssh private key")?
  • correlations amongst the above? E.g., given the probability that share A is a home directory and B is a `.ssh` directory inside it, the probability that share C is a private ssh key is P.

Replying to zooko:

[...]

My feeling is, if I want to make progress on this I should ignore the whole idea of a threat model and move ahead with implementation! That sounds wrong, but what's the next step on the threat model?

I wouldn't want to implement this feature yet without hearing more people's answers to these questions:

  • Do we believe range updates compromise the benefits of padding? Under what conditions?
  • Will the implementation only work for files which disallow range updates (such as immutables or SDMF files)?
  • If so, how will the UI indicate to users which files / file types have which size confidentiality features?
  • Or will users decide whether they prefer range updates or size confidentiality on a global scale, via a configuration setting?
  • What should the default setting be?
  • Will we spend more effort improving confidentiality in this way, or more time improving update efficiency, or will we split our time between two features which may counteract each other?
  • Are there other approaches to efficient update that work well with other approaches to size confidentiality?

For the last question, I'm interested in the proposed LDMF format which would store file data as snapshots with deltas. The range offsets in the deltas can be encrypted for confidentiality. The file content of the delta itself could be padded. Now a storage operator can see the timing and padded size of deltas, but not which parts of a file they update, nor the exact size of the delta contents.

So that last question is an example of how we might realize that we'd rather invest engineering into snapshot+delta formats instead of flat files with range updates.

Author

Edited the ticket description to use two of Marsh's suggestions from comment:394136. (The third suggestion is potentially really important, too, but doesn't easily fit into the ticket description.)

nickm commented 2014-02-10 17:24:49 +00:00
Owner

32*ceil(log2(F)) doesn't really hide file sizes so well; are you sure it's what you mean? If a file is 1.1 MB long, should the maximum padding really be only 32 * 21 == 672 bytes? That doesn't seem big enough to obscure the true file size.

Here's a simple experiment I just did. It's not a very good one, but with any luck people can improve it to make it something more scientifically sound. I took all the music files on my computer (about 16000 of them), and built a database of their file sizes. (I chose music files because their sizes are already in a similar range to one another, without being overly homogeneous. And because I had a lot of them. More thinking will identify a better corpus.)

```
% find ~/Music/*/ -type f -print0 | xargs -0 ls -l | perl -ne '@a = split; print "$a[4]\n";' > sizes
% wc -l sizes
16275 sizes
```

At the start, nearly every file size was unique.

```
% sort -n sizes | uniq -c | grep '^ *1 ' | wc -l
16094
```

Next I tried a "round the size up to the nearest multiple of max_pad" rule, taking max_pad as 32*ceil(log2(F)).

```
% python ~/Music/fsize.py < sizes | sort -n | uniq -c | grep '^ *1 ' | wc -l
9671
```

So, more than half of my files still have a unique size. That's probably not so good.

(I chose "rounding up" rather than "randomizing" since it makes the attacker's job strictly harder, and the analysis much simpler.)

Next I tried the rule: Let N = ceil(log2(F)). If N >= 5, let X = N - 5; otherwise let X = N. Let MaxPad = 2^X. Round the file size up to the nearest multiple of MaxPad. This time:

```
% python ~/Music/fsize.py < sizes | sort -n | uniq -c | grep '^ *1 ' | wc -l
65
```

Only 65 files had unique sizes. Note also that the padding overhead is about 1/32 of the file size, for large files.

(A better experiment would pick a better metric than "how many files have a unique size?" The count of unique-sized files doesn't capture the notion that having three files with the same padded size is better than having two files with the same padded size. Maybe something entropy-based would be the right thing here.)
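
The `fsize.py` script isn't included in the comment; here is a guess at what it might compute, implementing both rounding rules described above (reads one size per line on stdin, prints the padded size):

```python
#!/usr/bin/env python
# Hypothetical reconstruction of ~/Music/fsize.py from this comment.
import math
import sys

def padded_size_v1(f):
    """First rule: round F up to the nearest multiple of 32*ceil(log2(F))."""
    if f < 2:
        return f
    max_pad = 32 * math.ceil(math.log2(f))
    return ((f + max_pad - 1) // max_pad) * max_pad

def padded_size_v2(f):
    """Second rule: N = ceil(log2(F)); X = N - 5 if N >= 5 else N;
    round F up to the nearest multiple of MaxPad = 2^X."""
    if f < 2:
        return f
    n = math.ceil(math.log2(f))
    x = n - 5 if n >= 5 else n
    max_pad = 2 ** x
    return ((f + max_pad - 1) // max_pad) * max_pad

for line in sys.stdin:
    print(padded_size_v2(int(line)))   # swap in padded_size_v1 for rule 1
```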

nikita commented 2014-02-10 19:47:39 +00:00
Owner

Some other useful metrics: the minimum bucket size (which is essentially min-entropy) and the average bucket size (which I think is related to guessing entropy but I haven't done the math yet), where a bucket is a collection of files that pad to the same length.
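
A sketch of those metrics (my own illustration), treating a bucket as the set of files sharing one padded length:

```python
from collections import Counter

def bucket_metrics(padded_sizes):
    """Return (minimum, average) bucket size. A minimum of 1 means at
    least one file is still uniquely identifiable by its padded size."""
    counts = Counter(padded_sizes).values()
    return min(counts), sum(counts) / len(counts)
```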

Author

Hi, Nikita and NickM. I'm really perplexed about this whole topic, because I don't know what formal model applies to the real-world cases that I care about.

Just to throw out an example of a model: there's a set S of files, each with a size, and player A picks one file from S and reveals some function of that file's size and/or compression, and then the other player M tries to guess which file A picked. Is that something anyone cares about? I really don't know.

Something that I'm sure that I do care about is the CRIME/BEAST-style attacks on compression. I would feel better about LAFS's resistance to those if we added padding (of the kind described in the Description of this ticket).
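
One way to make that game concrete (my own sketch, not from the thread): for a candidate reveal function, compute M's optimal success probability, which is 1/|bucket| for each bucket of files sharing a revealed value.

```python
from collections import Counter

def guess_success(sizes, reveal):
    """A picks a file uniformly from S and leaks reveal(size); M guesses
    among files with a matching leak, succeeding with prob 1/|bucket|."""
    buckets = Counter(reveal(s) for s in sizes)
    return sum(1 / buckets[reveal(s)] for s in sizes) / len(sizes)

# Toy example: no padding vs. pad-to-next-power-of-two.
sizes = [1000, 1000, 1500, 2000, 3000]
print(guess_success(sizes, lambda s: s))                          # 0.8
print(guess_success(sizes, lambda s: 1 << (s - 1).bit_length()))  # 0.6
```

Padding schemes lower this success probability toward 1/|S| by merging buckets.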

Reference: tahoe-lafs/trac#2018