build/install should be able to refrain from getting dependencies #1220

Closed
opened 2010-10-04 23:43:44 +00:00 by gdt · 31 comments
Owner

In a managed package system, each program's dependencies are expressed in control files and provided before the package builds. If the package has more dependencies than expressed, the right behavior is failure so that this can be fixed, and it is unhelpful to download/install code either from included eggs or especially from the net.

There are two parts to this problem. One is downloading and installing things like py-cryptopp. The other is that tahoe seems to need modified versions of standard tools and has included eggs. This kind of divergence should be resolved.

I realize that this complaint is perhaps directed at setuptools, but tahoe-lafs inherits responsibility.

A reasonable solution would be to have a switch that packaging systems can add.

I put this on packaging even though the bug is in tahoe-lafs, not in any packaging of it.

tahoe-lafs added the
c/packaging
p/major
t/defect
v/1.8.0
labels 2010-10-04 23:43:44 +00:00
tahoe-lafs added this to the undecided milestone 2010-10-04 23:43:44 +00:00

I just remembered that there is the --single-version-externally-managed flag. If you pass that flag as an argument to python setup.py install then it will suppress all automated fetching of dependencies. We test the use of this flag on all of our buildbots -- look at the buildsteps called "install-to-prefix", e.g. [this one on NetBSD](http://tahoe-lafs.org/buildbot/builders/MM%20netbsd5%20i386%20warp/builds/116/steps/install-to-prefix), and "test-from-prefixdir", e.g. [this one on NetBSD](http://tahoe-lafs.org/buildbot/builders/MM%20netbsd5%20i386%20warp/builds/116/steps/test-from-prefixdir). "install-to-prefix" does an install using --single-version-externally-managed to suppress automated resolution of dependencies, and "test-from-prefixdir" runs the unit tests from the directory it was installed into.

Please try adding --single-version-externally-managed and see if that is sufficient to close this ticket.
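
For reference, a minimal sketch of what those two buildsteps boil down to (the prefix path and the test invocation are illustrative, not the exact buildbot configuration):

    # "install-to-prefix": install with automated dependency resolution suppressed
    python setup.py install --prefix="$PREFIX" \
        --single-version-externally-managed --record=installed-files.txt

    # "test-from-prefixdir": run the test suite against the installed copy
    # (adjust the python2.6 path for the Python version in use; trial is
    # Twisted's test runner)
    PYTHONPATH="$PREFIX/lib/python2.6/site-packages" trial allmydata.test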

Author
Owner

I don't see how a flag passed at install time would really fix the issue. What I would like is to tell the build step to not install missing dependencies.

Well, can you (either of you) show me a script that is used to package Python applications for your system? I imagine that you could do something like this:

tar xf $SOURCE_DISTRIBUTION
cd $TOP_LEVEL_DIR
python setup.py install --prefix=$TARGETDIR --single-version-externally-managed --record=list_of_installed_files.txt

Then collect all the files that got written into $TARGETDIR and put them into your newly created package. This should work with any setuptools-built Python package.

But, if that's not how you do it, then show me how you do it and I'll see if I can help make it so that the setuptools automatic resolution of dependencies gets out of your way.

Author
Owner

Here's a log of building under pkgsrc. You can see that it's basically setup.py build (with the presetup to have a symlink tree of allowed libraries, so that only expressed dependencies are available). build doesn't have --single-version-externally-managed but install does. So are you saying that I should pass --single-version-externally-managed to the build phase as well?

[pkgsrc-build-log.txt](http://pubgrid.tahoe-lafs.org/uri/URI%3ACHK%3Ap5jcx2zwxg3encam6kr4535tu4%3At3yrfpmvkbolyk7kgklitszcdguhc3ypqv46qawwyg5horsnnqpq%3A2%3A7%3A101689)

Okay, thanks for the log! My current thought is: do we need the build step for anything? What happens if you just comment-out that step and head straight for the install step? As far as I know, that will work, and will also completely avoid any automated downloading of any dependencies (since the install step already has --single-version-externally-managed). Tahoe-LAFS doesn't have any native code modules that need to be compiled, but even if it did (or if you used this same script for a different Python package which did have native code modules) then I think running python setup.py install would automatically build those native code modules, so I don't think you really need to invoke python setup.py build directly.

I just ran a quick manual test locally, and python setup.py build --single-version-externally-managed gives an error message saying that "--single-version-externally-managed" is not a recognized option for "build", but python setup.py install --single-version-externally-managed --prefix=instdir --record=list-of-installed-files.txt correctly builds and installs without downloading any dependencies.

Author
Owner

Replying to zooko:

Can't [you just install but not build]?

No, because pkgsrc requires that the build phase do all things that feel like what "make" should do, and stay within the working directory. Then install does what "make install" should do and puts compiled bits in a staging area. Then the package tar bundles up that staging area.

It seems odd to me that --single-version-externally-managed suppresses dependencies and is only valid at install. I had thought -svem was about changing the way the egg file is created, and the dep suppression seems to be a side effect.

Author
Owner

The real question for me is whether a build/install attempt would fail and refrain from getting dependencies in the case where they didn't already exist.

Replying to gdt (comment:9):

Replying to zooko:

Can't [you just install but not build]?

No, because pkgsrc requires that the build phase do all things that feel like what "make" should do, and stay within the working directory. Then install does what "make install" should do and puts compiled bits in a staging area. Then the package tar bundles up that staging area.

But there aren't any compiled bits, so as far as I can tell if we force the build phase to be a no-op then we still satisfy the pkgsrc protocol. Alternately, if you let the build phase be python setup.py build (just like it currently is) instead of a no-op then we are still satisfying the protocol because it keeps all of the deps that it acquires within its working directory.

But maybe there is another requirement for the build phase besides what you wrote above, such as "no open connections to remote hosts" or perhaps even more importantly "no printing out messages that make the human think that you are installing deps".

Is one or both of those a requirement? Am I missing some other requirements on what the build phase is allowed/required to do?

It seems odd to me that --single-version-externally-managed suppresses dependencies and is only valid at install. I had thought -svem was about changing the way the egg file is created, and the dep suppression seems to be a side effect.

Why do you find this to be odd? Perhaps it is because you think of python setup.py build as the step that would create an egg if an egg were going to be created? It is not—if an egg were going to be created, that would be done in python setup.py install.

Replying to gdt:

The real question for me is whether a build/install attempt would fail and refrain from getting dependencies in the case where they didn't already exist.

Oh, I see, so the requirement that I was missing on the "build" step is: "return non-zero exit code if any of the deps are missing".

Wait a minute, that's not truly a requirement. None of your C programs, for example, reliably do that, do they? Or maybe some of them do nowadays by using a tool like [pkg-config](http://pkg-config.freedesktop.org/wiki/)?

So, I'm still not 100% certain what you mean by "refrain from getting dependencies". Does my buildstep fail if it opens a TCP or HTTP connection but doesn't download any large files? Does it fail if it downloads a large file but that file isn't a dependency? What if it downloads a dependency as a .zip or a .tar but doesn't unpack it? What if it unpacks it but only into the current working directory (this is the one that it currently does)? What if it writes it into /usr/lib/python2.6/site-packages and then edits your /usr/lib/python2.6/site-packages/site.py script to change the way Python imports modules (this is the one that it would do if you ran sudo python setup.py install)? Does it matter whether it prints out messages describing what it is doing versus if it stays quiet? Does it matter how long it takes to finish the build step?

Replying to zooko (comment:12):

Replying to gdt:

The real question for me is whether a build/install attempt would fail and refrain from getting dependencies in the case where they didn't already exist.

Oh, I see, so the requirement that I was missing on the "build" step is: "return non-zero exit code if any of the deps are missing".

Wait a minute, that's not truly a requirement. None of your C programs, for example, reliably do that, do they? Or maybe some of them do nowadays by using a tool like pkg-config?

(following up to myself)

Although we could potentially do better than C programs and actually satisfy this requirement of reliably exiting with non-zero exit code if all of the deps aren't already present. Is that what we should do? It sounds like we would be going over and above the normal requirements of a pkgsrc build step and if we were going to go that direction then we should try to generalize the hack so that all Python programs that are being built by pkgsrc would do the same. :-)

Author
Owner

You raise good points about unarticulated requirements; a lot of them are captured in "what 'make' is supposed to do". So specifically, the build phase

  • should fail if any dependencies are missing. C programs use autoconf, or autoconf/pkg-config, and fail at configure phase. Or, they are old-school and do -lfoo and that fails at build time if libfoo is not installed. You are probably right that some C programs do not reliably fail, but they should.
  • must not use the net at all, and use only files expressed in the "distinfo" manifest and downloaded during the fetch phase, and unpacked in the working directory in the extract phase. If a file is needed it is listed in distinfo and make fetch gets it. (Without this, offline building fails and GPL compliance is difficult - how do you find the list of sources that must be distributed with the resulting binary package?)
  • must set up install so that the list of created files is always the same

An underlying goal is that building a package should have a deterministic outcome, with the same bits produced regardless of which dependencies or other programs were already installed. This allows the use of the resulting binary packages on other systems. If a program has an optional dependency foo, then the pkgsrc entry has to require foo (and thus depend on the foo package), or disable use of foo, or have a pkgsrc option to control it. Having the built package be built differently depending on whether foo is present is considered a packaging bug (and perhaps an upstream bug, if there's no --disable-foo switch/method).

It's also a goal to be able to 'make fetch-list|sh' on a net-connected machine and grab all distfiles but not build, and then to be able to build offline.

I see that there are .pyc files installed, but not produced during build. This seems wrong, but not important or causing an actual problem, and it seems to be the python way.

Basically, there's a huge difference in approach between large-scale package management systems and the various language-specific packaging systems. I suspect debian/ubuntu and rpms are much more like pkgsrc than not in their requirements. But there seems not to be a culture of bulk building all rpms in Linux; it seems the maintainers build them and upload them.
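
To make the offline goal concrete, here is a rough sketch of the two-machine flow described above, using standard pkgsrc targets (the package directory shown is illustrative):

    # on a net-connected machine: print and run the fetch commands for every
    # distfile listed in distinfo
    cd /usr/pkgsrc/filesystems/tahoe-lafs
    make fetch-list | sh

    # later, on a machine with no network access: all sources are already in
    # the distfiles area, so extract/build/package must succeed without the net
    make package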

Author
Owner

I ran 'python2.6 install --single-version-externally-managed --root ../.destdir' without having run build, after uninstalling nevow. The install completed, and then running that tahoe failed on importing tahoe.

Having read setup.py and _auto_deps.py, I think the problem is in hand-written setup code in tahoe-lafs which needs a switch to require/fail vs require/fetch.

This problem isn't causing me lots of trouble; I simply check the build output when updating the package and manually consider it broken if it uses the net.

Should we close this ticket, due to the existence of the --single-version-externally-managed flag for python setup.py install?

Author
Owner

No, because a) --svem isn't usable during a build phase (install writes to the destination) and b) it doesn't check dependencies and fail. (This gives me the impression install is only supposed to be used after build.)

I don't mean to demand that anyone spend time on this, but I still think the setup.py code is incorrect compared to longstanding open source norms.

I would be curious to hear about how people who work on packaging for other systems deal with this issue.

Author
Owner

This problem is an annoyance and increases the risk of packaging errors, but the resulting packages are ok. Therefore dropping to minor, which it probably should have been already.

tahoe-lafs added
p/minor
and removed
p/major
labels 2010-11-01 11:40:00 +00:00

Replying to gdt:

No, because a) --svem isn't usable during a build phase (install writes to the destination) and b) it doesn't check dependencies and fail. (This gives me the impression install is only supposed to be used after build.)

'setup.py install' and 'setup.py build' are alternatives. As far as I understand, it isn't intended that both be used.

I don't mean to demand that anyone spend time on this, but I still think the setup.py code is incorrect compared to longstanding open source norms.

I don't dispute that, but I favour making sure that a replacement for setuptools -- probably Brian's "unsuck" branch -- follows those norms by default, rather than continuing to hack at zetuptoolz. zooko's efforts with the latter are appreciated, but that approach has consumed an enormous amount of development effort, and is still causing obscure and often irreproducible bugs on our buildslaves and for our users.

I was just hacking at zetuptoolz and I noticed that there is already a method named url_ok() which implements the feature of excluding certain domain names from the set that you will download from. If we hack it to always return False (when the user has specified "no downloads") then this would be our implementation of this ticket. Here is [the url_ok() method in zetuptoolz](http://tahoe-lafs.org/trac/zetuptoolz/browser/trunk/setuptools/package_index.py?annotate=blame&rev=580#L222). Here is the current body of it:

    def url_ok(self, url, fatal=False):
        s = URL_SCHEME(url)
        if (s and s.group(1).lower()=='file') or self.allows(urlparse.urlparse(url)[1]):
            return True
        msg = "\nLink to % s ***BLOCKED*** by --allow-hosts\n"
        if fatal:
            raise DistutilsError(msg % url)
        else:
            self.warn(msg, url)

Replying to gdt:

No, because a) --svem isn't usable during a build phase (install writes to the destination)

How about this. I'm going to propose a build step and you have to tell me if you would accept any code that passes that build step or whether you have other requirements.

The buildstep starts with a pristine tarball of tahoe-lafs and unpacks it, then runs python setup.py justbuild. If the code under test emits any lines to stdout or stderr which have the phrase "Downloading http" then it is marked as red by this buildstep. (The implementation of this test is visible here: [misc/build_helpers/check-build.py]source:trunk/misc/build_helpers/check-build.py?annotate=blame&rev=4434#L15, which is invoked from here: [Makefile]source:trunk/Makefile?annotate=blame&rev=4847#L278)

Then the buildstep runs python setup.py justinstall --prefix=$PREFIXDIR. Then it executes $PREFIXDIR/bin/tahoe --version-and-path and if the code under test emits the right version and path then it is marked as green by this buildstep, else it is marked as red.

Now, one thing that this buildstep does not require of the code under test is that it detect missing dependencies or that it find and download missing dependencies. That would be cool, and you have requested it in this ticket, and I know how to implement it, but since that is above and beyond the standard packaging functionality that we're trying to emulate perhaps we should open a separate ticket and finish fixing the basic functionality first.

This means that the test can't give the code under test a fair chance of going green unless it is run on a system where all of the dependencies are already installed. As far as I understand, that's standard for this sort of packaging.
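
A rough sketch of what that buildstep amounts to as a shell script (justbuild and justinstall are the command names proposed above, not commands that exist in setup.py today):

    # build from a pristine unpacked tarball, recording all output
    python setup.py justbuild 2>&1 | tee build.log
    # mark the step red if the build tried to fetch anything
    if grep -q "Downloading http" build.log; then
        echo "FAIL: build step attempted to download a dependency" >&2
        exit 1
    fi

    # install to a prefix and verify the installed entry point reports
    # the expected version and path
    python setup.py justinstall --prefix="$PREFIXDIR"
    "$PREFIXDIR/bin/tahoe" --version-and-path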

If you like this ticket, you might also like #1270 (have a separate build target to download any missing deps but not to compile or install them).

I don't consider this a minor issue, because the downloading from potentially insecure sites is a significant vulnerability (as we were recently reminded by [SourceForge being compromised](http://news.ycombinator.com/item?id=2150639) -- and setuptools will happily download from far less secure sites than SourceForge).

daira added
p/major
and removed
p/minor
labels 2011-01-29 04:34:30 +00:00

People were just wishing for related (but not identical) functionality on the distutils-sig mailing list and Barry Warsaw settled on patching setup.cfg of each Python project that he is building to add this stanza:

[easy_install]
allow_hosts: None

http://mail.python.org/pipermail/distutils-sig/2011-February/017400.html

But I still feel like this ticket is underspecified. Before I make further progress on this ticket I want someone who cares a lot about this issue to tell me whether the test procedure (which is a Buildbot "build step") in comment:21 would be sufficient.
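
For example, a packaging recipe could drop that stanza into the unpacked source before building, roughly like this (a sketch, not an existing pkgsrc or Debian rule):

    # refuse, rather than perform, any attempted download during setup.py runs
    printf '[easy_install]\nallow_hosts: None\n' >> setup.cfg
    python setup.py build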

As Kyle mentioned on [a mailing list thread](https://tahoe-lafs.org/pipermail/tahoe-dev/2012-May/007339.html), it would be nice if, when the build system detects that it already has everything it needs locally, then it doesn't look at the net at all. If this ticket were fixed, and we had the ability to refrain from getting dependencies, then we could also implement this added feature of "don't look at the net if you already have everything you need". I guess that should really be a separate ticket, but I honestly don't feel like going to all the effort to open a separate ticket.

I'll just re-iterate that if you want me, or anyone else, to make progress on this ticket, then please start by answering my questions from comment:21.

The "allow_hosts=None" configuration that Barry Warsaw was using (mentioned in comment:381458) is documented here:

The "allow_hosts=None" configuration that Barry Warsaw was using (mentioned in [comment:381458](/tahoe-lafs/trac/issues/1220#issuecomment-381458)) is documented here: * [setuptools doc](http://peak.telecommunity.com/DevCenter/EasyInstall#restricting-downloads-with-allow-hosts) * [distribute doc](http://packages.python.org/distribute/easy_install.html#restricting-downloads-with-allow-hosts)

pip has [the following relevant options](http://www.pip-installer.org/en/latest/usage.html):

-d, --download <dir>

 Download packages into <dir> instead of installing
 them, regardless of what’s already installed.

--download-cache <dir>

 Cache downloaded packages in <dir>.

--src <dir>

 Directory to check out editable projects into.
 The default in a virtualenv is “<venv path>/src”.
 The default for global installs is
 “<current dir>/src”.

-U, --upgrade

 Upgrade all packages to the newest available
 version. This process is recursive regardless of
 whether a dependency is already satisfied.

--force-reinstall

 When upgrading, reinstall all packages even if
 they are already up-to-date.

-I, --ignore-installed

 Ignore the installed packages (reinstalling
 instead).

--no-deps

 Don’t install package dependencies.

--no-install

 Download and unpack all packages, but don’t
 actually install them.

--no-download

 Don’t download any packages, just install the
 ones already downloaded (completes an install run
 with –no-install).

These seem very comprehensive and useful!

jmalcolm commented 2013-12-29 23:28:27 +00:00
Author
Owner

I don't fully understand Zooko's suggestion in comment:21 above, probably because I know very little about python packaging. Here's what I would want:

  1. A way for there to be no network activity, of any kind, when building or installing Tahoe-LAFS

that implies:

  2. Whether or not network activity is available, a build or install should have the same behavior - either it works, as it can find all dependencies, or it can't, so it fails

Replying to jmalcolm:

I don't fully understand Zooko's suggestion in comment:21 above, probably because I know very little about python packaging. Here's what I would want:

jmalcolm: what you wrote there seems consistent with my proposal from comment:21.

On #2055, dstufft wrote:

FWIW pip --no-download is bad and you shouldn't use it. If you want to do that you should download the packages to a directory (you can use pip install --download <directory> [package ...] for that) and then use pip install --no-index --find-links <directory> [package ...].
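
A sketch of that two-phase approach (the exact flags depend on the pip version in use; in later pip releases "pip install --download" was replaced by the separate "pip download" command, and the package name below is just an example):

    # phase 1 (online): fetch the package and its dependencies into a directory,
    # without installing anything
    pip install --download /tmp/pkgcache tahoe-lafs

    # phase 2 (offline): install only from that directory, never consulting an index
    pip install --no-index --find-links /tmp/pkgcache tahoe-lafs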

I think a good next-step on this is #2473 (stop using setup_requires).

Another good next step on this is to take the "Desert Island" test (https://github.com/tahoe-lafs/tahoe-lafs/blame/15a1550ced5c3691061f4f07d3597078fef8814f/Makefile#L285) and copy it to make this test. The changes from the "Desert Island" test to this test are:

  1. This test starts with just the Tahoe-LAFS source; the Desert Island test starts with the SUMO package.
  2. This test runs python setup.py justbuild; the Desert Island test runs python setup.py build.

I think this should be resolved, now that we're using pip/virtualenv, and do not have a setup_requires= anymore. Packagers can use python setup.py install --single-version-externally-managed with a --root that points into a new directory, then turn that directory into a package. I believe this is how Debian currently does things, and by changing Tahoe to behave like every other python package, we should be able to take advantage of that machinery.

gdt, please feel free to re-open this if you disagree.
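
A minimal sketch of that staged-install flow (directory names are illustrative):

    # install into a staging root; with this flag setuptools does not attempt
    # any automated fetching of dependencies
    python setup.py install --single-version-externally-managed \
        --root=/tmp/staging --prefix=/usr --record=installed-files.txt

    # the packaging machinery then turns the staging tree into a binary package,
    # e.g. by archiving it or handing it to dpkg/pkgsrc tooling
    tar -C /tmp/staging -czf tahoe-lafs-binary-pkg.tar.gz .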

warner added the
r/fixed
label 2016-03-26 21:27:07 +00:00
warner modified the milestone from undecided to 1.11.0 2016-03-26 21:27:07 +00:00