have buildslaves automatically build debian packages of foolscap, zfec, pycryptopp, pyutil, argparse, zbase32 #769

Closed
opened 2009-07-21 03:44:53 +00:00 by warner · 47 comments

I just tried to run tahoe --version from a .deb I built from trunk (on my sid system), and it fails with the following error:

pkg_resources.VersionConflict: (pycryptopp 0.5.2-1 (/usr/lib/python2.5/site-packages), Requirement.parse('pycryptopp>=0.5.15'))

I've updated the debian/control files to make this requirement visible to the debian package manager (i.e. it should not let me install this tahoe), but apart from that, we need updated pycryptopp debs before we can release 1.5.0. We need them for all of the same platforms for which we provide Tahoe debs: etch/edgy/feisty/gutsy/hardy/hardy-amd64.

I think we're ok with zfec and foolscap packages.
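
(For context, the error above comes from setuptools' pkg_resources machinery resolving Tahoe's declared dependencies at import time. A minimal sketch of that kind of check, written in Python 2 to match the traceback's environment -- the requirement string is the one from the traceback, everything else is illustrative:)

```
# Sketch of the dependency check that fails above: with pycryptopp 0.5.2
# installed, resolving the ">=0.5.15" requirement raises VersionConflict.
import pkg_resources

try:
    pkg_resources.require("pycryptopp>=0.5.15")
except pkg_resources.VersionConflict, e:
    print "version conflict:", e
```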

warner added the
c/packaging
p/critical
t/defect
v/1.4.1
labels 2009-07-21 03:44:53 +00:00
warner added this to the 1.5.0 milestone 2009-07-21 03:44:53 +00:00
zooko was assigned by warner 2009-07-21 03:44:53 +00:00

Whenever a new patch is committed to the pycryptopp repository, the buildbot builds new .deb packages for a couple of platforms -- Debian-unstable-i386 and Ubuntu-Jaunty-amd64:

http://allmydata.org/buildbot-pycryptopp/builders/BlackDew%20debian-unstable-i386/builds/10/steps/stdeb/logs/stdio

Probably the resulting .deb would work for you. How shall we arrange for it to be installed into an apt repository where your apt-get will find it?

arthur commented 2009-07-21 13:40:02 +00:00
Owner

Here is a copy of the debian files generated for i386:

http://testgrid.allmydata.org:3567/uri/URI%3ACHK%3Aaws3mawi42itcztaotbeguefry%3A5nufi4nprsitbklcimg3rokoaxccitrpr4xi7yapjvbcpr672x4a%3A3%3A10%3A5438390

Content:
pycryptopp_0.5.15-1.diff.gz
pycryptopp_0.5.15-1.dsc
pycryptopp_0.5.15-1_i386.build
pycryptopp_0.5.15-1_i386.changes
pycryptopp_0.5.15-1.tar.gz
pycryptopp_0.5.15_i386.build
pycryptopp_0.5.15.orig.tar.gz
pycryptopp_0.5.15.tar.gz
pycryptopp-0.5.15.tar.gz
python-pycryptopp_0.5.15-1_i386.deb
python-pycryptopp-dbg_0.5.15-1_i386.deb

Author

I updated the tahoe debian/control rules to declare a dependency upon pycryptopp-0.5.15: the tahoe .debs now correctly refuse to install.

Zooko: I set up a bunch of "flappserver upload-file" services for uploading tahoe .debs to the repository on hanford (which gets mirrored to allmydata.org). Let's do the same for the pycryptopp debs.

In buildslave@hanford, do this one or more times:

flappserver add ~/.flappserver -C DIST/main/binary-i386 upload-file ~/tahoe-debs/dists/DIST/main/binary-i386

(setting DIST appropriately for each time)

Each time you run that, it will emit a furl. Copy this to the buildslave, stored in something like ~/main-deb.furl . Then add an upload step to the buildmaster config, which runs:

flappclient --furlfile ~/main-deb.furl upload-file ../pycryptopp*.deb

You should also copy the "tahoe-update-apt.furl" and have a step to update the APT index by running

flappclient --furlfile ~/tahoe-update-apt.furl run-command

(take a look at any of the tahoe deb builders for an example)
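
(For the buildmaster side, here is a hedged sketch of what those two steps might look like in a buildbot master.cfg, using the generic ShellCommand step. The two flappclient command lines are the ones quoted above; the factory setup around them is an assumption, not the actual config:)

```
# Hypothetical master.cfg fragment: append the deb-upload and apt-reindex
# steps to a deb-building factory. Only the two flappclient commands are
# taken from the recipe above; the rest is assumed.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

f = BuildFactory()
# ... the existing checkout/build/stdeb steps would go here ...
f.addStep(ShellCommand(
    description=["upload", "deb"],
    command="flappclient --furlfile ~/main-deb.furl upload-file ../pycryptopp*.deb"))
f.addStep(ShellCommand(
    description=["update", "apt", "repo"],
    command="flappclient --furlfile ~/tahoe-update-apt.furl run-command"))
```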

I don't know how you ought to manage the variety of platforms, though. We build
and host tahoe debs for etch/edgy/feisty/gutsy/hardy/hardy-amd64, so it'd be
nice to provide pycryptopp debs for all of those. The tahoe .deb process only
actually has rules for etch, lenny, and sid: basically we use the etch rules
for the py2.4 platforms (etch and edgy), sid for sid, and lenny for
everything else. But we run those rules on actual edgy/feisty/etc systems so
they declare the right dependencies (in general you can't install a newer deb
on an older system, because it will declare a dependency upon newer versions
of libc, etc).

You'd need to find some similar mapping for pycryptopp, and create debs for
all the same platforms we do for Tahoe. It sounds like you're only currently
creating debs for one of those platforms (sid), and that .deb probably won't
install on any of the earlier ones.

Author

Oh, or of course you could just log in to each of our debian/ubuntu buildslaves in turn, build .deb/.orig.tar.gz/.diff.gz packages, scp them over to hanford:~buildslave/tahoe-debs/... , and run 'make' to update the index and remirror to org. Any level of automation you want to apply to this process would make your life easier in the long run.

Hm, I don't personally own or operate any of the Debian buildslaves currently connected to the pycryptopp buildbot -- http://allmydata.org/buildbot-pycryptopp/waterfall -- just the jaunty/amd64 one (yukyuk).

I guess in order to automatically produce .deb's for these various flavors of Debian, we need to add buildslaves for pycryptopp running each flavor? I'm not going to make time in the foreseeable future to set up lots of buildslaves for various flavors of Debian, although I can definitely do Jaunty/amd64 right now...

Hm, the first step in your recipe doesn't work:

buildslave@hanford:~$ flappserver add ~/.flappserver -C jaunty/main/binary-amd64 upload-file ~/tahoe-debs/dists/jaunty/main/binary-amd64
Traceback (most recent call last):
  File "/usr/bin/flappserver", line 18, in <module>
    run_flappserver()
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 471, in run_flappserver
    r = dispatch(command, so)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 440, in dispatch
    return c.run(options)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 191, in run
    s = build_service(service_basedir, None, service_type, service_args)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/services.py", line 279, in build_service
    raise UnknownServiceType(service_type)
foolscap.appserver.services.UnknownServiceType: -C

Anyway, as I was saying, I'll spend some time setting up automation to produce .deb's for jaunty/amd64, and I'll also set up a server owned by allmydata.com for lenny/i386, but I don't care about the various other flavors of Debian enough to set up .deb-producing automation for them. If someone else wants to volunteer to do the buildslave side of this recipe for a Debian flavor that they care about then I'll do the buildmaster side of the recipe.

I set up a pycryptopp buildslave on a server owned by allmydata.com named "slave-etch":

http://allmydata.org/buildbot-pycryptopp/builders/allmydata.com%20debian-etch-i686

I looked around, but allmydata.com doesn't seem to run a Debian lenny server. As soon as Brian tells me what's wrong with the flappserver setup (above), I'll set that up to receive .deb's at hanford.

Meanwhile, does someone want to volunteer to run a Debian lenny (5.0) buildslave? We already have a Debian sid (unstable) buildslave thanks to Black Dew:

http://allmydata.org/buildbot-pycryptopp/builders/BlackDew%20debian-unstable-i386/builds/11/steps/stdeb/logs/stdio

Oh, I see that "stdeb 0.3-bdew-custom-fixes (/usr/lib/python2.5/site-packages/stdeb-0.3_bdew_custom_fixes-py2.5.egg)" doesn't satisfy the requirement that I just added of stdeb >= 0.3. :-) I'll loosen that requirement to just stdeb. Black Dew: what custom fixes are in that package? I think you reported them all to stdeb upstream (Andrew Straw), but I'm not sure. Black Dew: would you be willing to set up a flapp client to upload the resulting .deb's for Debuan unstable to our apt repository?

Author

sorry, use lowercase "-c" or "--comment", not uppercase "-C".

You could manually build pycryptopp debs on each of the allmydata-owned debian buildslaves: we have one for each debian release that we make tahoe .debs for (obviously). This is what I do each time I make a new foolscap release, regardless of whether tahoe depends upon the new version or not. I have a script to reduce the typing involved, though. The list is in the AdminServers on the (allmydata-only) dev trac instance, or in the comments on the buildmaster config's slavename list.

The reason I'm concerned about this is that the tahoe debs that we've been building for years now are useless without all of the support debs. pycryptopp is the only one that we're missing. Until we have updated pycryptopp debs, users of those older platforms (including Hardy, the current Ubuntu long-term-support release) will be forced to build their own, a relatively difficult and annoying process for someone used to simply typing "apt-get install allmydata-tahoe". In addition, servers which are already running hardy (such as the allmydata.com hosts) will be unable to upgrade until their admins manually build pycryptopp debs.

I would be happy for Tahoe-LAFS (and pycryptopp) to be apt-get install'able by people who use various Debian platforms and who configure their /etc/apt/sources.list to point to allmydata.org's apt repositories. However, I have very little time to work on fun stuff (Tahoe-LAFS) nowadays, and manually building .deb's doesn't qualify as "fun". If someone pays me to do it, or if someone else does it, then .deb's will manually get built. Fortunately, setting up automation to automatically build .deb's counts as "fun". ;-)

> sorry, use lowercase "-c" or "--comment", not uppercase "-C".

I thought I tried that...

Yes, I did:

buildslave@hanford:~$ flappserver add ~/.flappserver -c jaunty/main/binary-amd64 upload-file ~/tahoe-debs/dists/jaunty/main/binary-amd64
Traceback (most recent call last):
  File "/usr/bin/flappserver", line 18, in <module>
    run_flappserver()
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 471, in run_flappserver
    r = dispatch(command, so)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 440, in dispatch
    return c.run(options)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 191, in run
    s = build_service(service_basedir, None, service_type, service_args)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/services.py", line 279, in build_service
    raise UnknownServiceType(service_type)
foolscap.appserver.services.UnknownServiceType: -c
buildslave@hanford:~$ flappserver add ~/.flappserver --comment jaunty/main/binary-amd64 upload-file ~/tahoe-debs/dists/jaunty/main/binary-amd64
Traceback (most recent call last):
  File "/usr/bin/flappserver", line 18, in <module>
    run_flappserver()
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 471, in run_flappserver
    r = dispatch(command, so)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 440, in dispatch
    return c.run(options)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/cli.py", line 191, in run
    s = build_service(service_basedir, None, service_type, service_args)
  File "/usr/lib/python2.5/site-packages/foolscap/appserver/services.py", line 279, in build_service
    raise UnknownServiceType(service_type)
foolscap.appserver.services.UnknownServiceType: --comment

Since I get only about an hour to work on Tahoe-LAFS every day (at most, and in several small bursts), and since you and I are out of sync so that it takes about 24 hours for every round trip between us, maybe you could set up the flappserver on hanford?

Or, if you don't want to, then please write back and tell me what's wrong with my current flappserver setup command-line. :-)

We now have three Debian buildslaves which are successfully generating .deb's:

http://allmydata.org/buildbot-pycryptopp/builders/linux-amd64-ubuntu-jaunty-yukyuk
http://allmydata.org/buildbot-pycryptopp/builders/BlackDew%20debian-unstable-i386
http://allmydata.org/buildbot-pycryptopp/builders/allmydata.com%20debian-etch-i686

The one that I don't have a login to is the one that I most want to have .deb's for -- Debian unstable. Hopefully Black Dew will be willing to set up the client side flapp service to upload the .deb's that his buildslave is building.

To my surprise, Hardy and Etch have been used a lot more than Jaunty, Lenny, or Sid in the last three months since the Tahoe-LAFS v1.4 release:

http://allmydata.org/pipermail/tahoe-dev/2009-July/002384.html

Fixed statistics:

http://allmydata.org/pipermail/tahoe-dev/2009-July/002386.html

If this is true it means that Brian is more or less the only user of Tahoe-LAFS on Debian sid. :-)

Arthur pointed out that people use older apt repos than their actual debian dist:

http://allmydata.org/pipermail/tahoe-dev/2009-July/002387.html

And he volunteered a Debian lenny buildslave:

http://allmydata.org/buildbot-pycryptopp/builders/Arthur%20debian-lenny-c7-i386

So I skipped the --comment option, which I couldn't figure out, and it worked without it. Now we have automatic upload of pycryptopp .deb's and automatic rebuild of the apt repo index so that "apt-get" can find and download the .deb's! So far the jaunty-amd64 buildslave (my workstation) is the only one doing this:

http://allmydata.org/buildbot-pycryptopp/builders/linux-amd64-ubuntu-jaunty-yukyuk

The next step is for other buildslave operators to agree to install the furl files into their buildslave base directory, and then I'll set the buildmaster to invoke the "upload-deb" and "update-apt-repo" on those buildslaves. The furl files are confidential (knowledge of them gives the ability to upload the debs and regenerate the apt repo index, respectively), so write to zooko@zooko.com asking for yours and I'll e-mail you yours back.

Hm, sudo apt-get install python-pycryptopp doesn't work on my system even after the buildbot appears to have uploaded the pycryptopp .deb and updated the apt repo for jaunty on amd64 in this step:

http://allmydata.org/buildbot-pycryptopp/builders/linux-amd64-ubuntu-jaunty-yukyuk/builds/23/steps/update%20apt%20repo/logs/stdio

It says:

"""
MAIN yukyuk:~$ sudo apt-get install python-pycryptopp
Reading package lists... 0%
Reading package lists... 3%
Reading package lists... Done
Building dependency tree
Reading state information... Done

Package python-pycryptopp is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package python-pycryptopp has no installation candidate
"""

My /etc/apt/sources.list includes:

deb http://allmydata.org/debian/  jaunty  main tahoe

Is there a manual step that needs to be done on the apt repository server to initialize the jaunty repo for the first time? (I think before this automated step started getting run there were no "jaunty"-flavored debs on this server.)

By the way, once we have the new automated apt-repo for pycryptopp sorted out then we need to update this wiki page, adding jaunty and setting the flags correctly for the imminent v1.5.0 release:

http://allmydata.org/trac/tahoe/wiki/DownloadDebianPackages

Owner

Oh, I see that "stdeb 0.3-bdew-custom-fixes (/usr/lib/python2.5/site-packages/stdeb-0.3_bdew_custom_fixes-py2.5.egg)" doesn't satisfy the requirement that I just added of stdeb >= 0.3. :-) I'll loosen that requirement to just stdeb. Black Dew: what custom fixes are in that package? I think you reported them all to stdeb upstream (Andrew Straw), but I'm not sure.

The fixes there were needed to make it work with extras-require - and it was removed from the packages because of the problems IIRC, so it should work with unmodified stdeb i think.

I'll try replacing it with the original and see how it works.

> Black Dew: would you be willing to set up a flapp client to upload the resulting .deb's for Debian unstable to our apt repository?

Sure, send me the furl and I'll install flappclient later today.

Owner

> The fixes there were needed to make it work with extras-require - and it was removed from the packages because of the problems IIRC, so it should work with unmodified stdeb, I think.

OK, I've checked and my changes aren't needed for now, but there's a patch from Zooko (http://github.com/astraw/stdeb/issues/unreads#issue/2) that fixes a bug with paths that contain spaces, and it's not included in 0.3.

Also note that currently stdeb on zfec fails for me with:

The required version of setuptools (>=0.6c12dev) is not available, and
can't be installed while this script is running. Please install
 a more recent version first, using 'easy_install -U setuptools'.

(Currently using setuptools 0.6c9 (/usr/lib/python2.4/site-packages))

This happens because stdeb loads setuptools manually before calling setup.py, and setup.py gets the system default version, instead of the one it has bundled.

I can install 0.6c12dev as a default on my buildslave and it'll probably build the packages, but this isn't something to be expected on end users' systems if they want to build the debs themselves.
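
(For reference, that error message is the standard complaint of the ez_setup-style setuptools bootstrap when an older setuptools has already been imported into the process. A hedged reconstruction of the sequence follows; whether zfec's setup.py uses stock ez_setup or its own bundled variant is an assumption:)

```
# Hypothetical reproduction of the conflict described above: stdeb imports
# the system setuptools first...
import setuptools   # e.g. 0.6c9 from /usr/lib/python2.4/site-packages

# ...then setup.py's bootstrap asks for a newer one. Because an older
# setuptools is already imported, it cannot be upgraded "while this script
# is running", and the bootstrap aborts with the error quoted above.
from ez_setup import use_setuptools
use_setuptools(min_version="0.6c12dev")
```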

In the long term we'll probably switch to the new fork of setuptools named "Distribute" http://mail.python.org/pipermail/distutils-sig/2009-July/012665.html and contribute patches to that project.

For now it wouldn't hurt for you to install setuptools (actually zetuptoolz) 0.6c12dev on your system, but on the other hand we already have .deb's for zfec, so it is okay if your system can't produce .deb's of zfec at the moment.

Black Dew's debian-unstable buildslave is now trying to upload .deb's...

http://allmydata.org/buildbot-pycryptopp/builders/BlackDew%20debian-unstable-i386

It worked! Now we need someone with Debian to test whether sudo apt-get install allmydata-tahoe works.

Okay, and thanks to Arthur Lutz we now have a second Debian buildslave uploading .deb's, this one is lenny:

http://allmydata.org/buildbot-pycryptopp/builders/Arthur%20debian-lenny-c7-i386/builds/22/steps/upload%20deb/logs/stdio

Now we just need someone with Debian-or-Ubuntu/x86 to test whether the pycryptopp .deb's in the allmydata.org apt repo work for them (and whether therefore the allmydata-tahoe .deb's also work).

arthur commented 2009-07-28 14:23:09 +00:00
Owner

This is what I get when doing an apt-get update:

Err http://allmydata.org lenny/main Packages
404 Not Found
Err http://allmydata.org lenny/tahoe Packages
404 Not Found
Err http://allmydata.org lenny/main Sources
404 Not Found
Err http://allmydata.org lenny/tahoe Sources
404 Not Found
W: Failed to fetch http://allmydata.org/debian/dists/lenny/main/binary-i386/Packages 404 Not Found

W: Failed to fetch http://allmydata.org/debian/dists/lenny/tahoe/binary-i386/Packages 404 Not Found

W: Failed to fetch http://allmydata.org/debian/dists/lenny/main/source/Sources 404 Not Found

W: Failed to fetch http://allmydata.org/debian/dists/lenny/tahoe/source/Sources 404 Not Found

E: Some index files failed to download, they have been ignored, or old ones used instead.

terrell commented 2009-07-28 14:42:34 +00:00
Owner

in the browser - i confirm

404 for http://allmydata.org/debian/dists/lenny/main/binary-i386/Packages

but...

200 for http://allmydata.org/debian/dists/lenny/main/binary-i386/
listing
python-pycryptopp_0.5.16-1_i386.deb

so something is wonky with the paths - looking for Packages and Sources that don't exist

Author

I'm in an airport now, but when I'm back at home tomorrow I'll try to fix up the apt repo update scripts to include the new releases: lenny, jaunty, etc. (I had set up the flappserver receivers ahead of time, for more releases than we had APT automation to handle).

I'm a bit worried about the version numbers on the pycryptopp debs: it looks like they're being created with "0.5.16-1", with no "r1234"-type suffix. Is that right? The APT repository (and the debian clients which pull from it) will get confused if two different binaries are uploaded with the same version number, such as if 0.5.16-r1234 and 0.5.16-r1235 are both uploaded under the name "0.5.16-1". Is stdeb using the output of darcsver?

Hm, no I get .deb files with the full version number including -r:

-rw-r--r--  1 zooko zooko 449358 2009-07-29 07:55 ./python-pycryptopp_0.5.16-r667-1_amd64.deb

Oh, except for the "release" versions, which contain no new patches since the most recent release tag. Those ones leave out the -r$COUNT but keep the -1 that stdeb seems to want to append:

-rw-r--r--  1 zooko zooko  449348 2009-07-29 08:03 ./python-pycryptopp_0.5.16-1_amd64.deb

Good enough?

Author

yeah, that sounds great. thanks!

I failed to get the APT repo updated today. I'll be back online next tuesday or wednesday, and will try to get it done then. Note to self: look into using "mini-dinstall" instead of apt-ftparchive. Also, POX says to look at http://upsilon.cc/~zack/blog/posts/2009/04/howto:_uploading_to_people.d.o_using_dput/ .

I think we should announce Tahoe-LAFS v1.5.0 Saturday even if this ticket (apt-get install allmydata-tahoe) isn't fixed and even if #773 (document installation on Windows) isn't fixed. I highly value those tickets, but this release is just taking way too long and I want to get it over with and move on! We can fix these tickets in the days following the v1.5.0 release.

Moving this from "v1.5.0" Milestone to "v1.5.1".

Sebastian Kuzminsky might offer some help setting up apt repos, in which case I might have a whack at it myself today.

I hooked up a buildslave for pycryptopp on the Ubuntu Hardy server operated by allmydata.com:

http://allmydata.org/buildbot-pycryptopp/builders/allmydata.com-hardy-i386%20syslib/builds/8

Arthur is already running a lenny buildslave on i386 which is uploading .deb's for pycryptopp:

http://allmydata.org/buildbot-pycryptopp/builders/Arthur%20debian-lenny-c7-i386/builds/22

I hope that these .deb's will make apt-get install allmydata-tahoe work for most current Debian users, in which case we can remove the note at the top of http://allmydata.org/trac/tahoe/wiki/DownloadDebianPackages that sadly says apt-get is not currently working.

My friend Seb Kuzminsky uses a tool called "dpkg-scanpackages", I think, to generate apt repo indexes from a set of .deb's.

(He also told me about a tool called "pbuilder" (??) which creates a chroot and installs only the debian packages that you specified in your build-depends. That way you find out if you forgot to include a dep in your build-depends.)

Brian posted an update on this issue to the list:

http://allmydata.org/pipermail/tahoe-dev/2009-August/002547.html

#644 was a duplicate of this.

exarkun might be able to set up a grid once this ticket is fixed.

I wonder if exarkun gets mail when I comment on this ticket.

Signs point to yes.

:)

We need a hardy buildslave to move forward on this:

http://allmydata.org/buildbot-pycryptopp/waterfall

zooko added this to the eventually milestone 2009-10-26 20:40:51 +00:00
zooko changed title from debian 'sid' package fails to run: pycryptopp-0.5.15 not available in debian form to need .deb's of pycryptopp and zfec 2009-10-26 21:12:49 +00:00

And we need a sid buildslave too. We already have a lenny one thanks to soultcer. We have a hardy-amd64 buildslave for Tahoe-LAFS thanks to Zandr Milewski, so maybe we could get him to set up buildslaves on that machine for pycryptopp and zfec.

Author

The testgrid introducer is offline: it is hosted on a gutsy box, and was accidentally rebooted yesterday. Tahoe is not currently installed on it, because the current allmydata-tahoe .deb cannot be installed, due to a lack of a gutsy pycryptopp-0.5.15 debian package.

According to wiki/DownloadDebianPackages there are six debian/ubuntu platforms that we currently intend to support (that's how I interpret the column titled priority): lenny i386, sid i386, hardy i386, hardy amd64, karmic i386, and karmic amd64.

Of those six, two come with Tahoe-LAFS natively (Karmic, both i386 and amd64).

One is currently marked as fully working: hardy i386 has "apt-get installable: yes", "deb buildable: yes", "tahoe deb available: yes", "support debs available: yes", "runs-from-source: yes".

The other three all have missing pieces: sid i386 doesn't have "tahoe deb available", hardy amd64 doesn't have "support debs available" or "apt-get installable" and lenny i386 has a bunch of question marks and parentheticals that I don't understand.

Now the process of producing and hosting .deb's for zfec and pycryptopp is very automated. To set it up you:

  1. Set up a buildslave on the platform for which you wish to produce .deb's.
  2. Turn on the build_deb=True argument to the make_factory() for that builder in the master.cfg file.
  3. Turn on the upload_deb=True argument.
  4. Give that buildslave a flappclient furl in $WORKINGDIR/../../main-deb.furl which allows it to upload the resulting .deb to the right place.
  5. Give that buildslave a flappclient furl in $WORKINGDIR/../../tahoe-update-apt.furl which allows it to trigger the "rebuild our apt repository" service.
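
(A hedged sketch of what steps 2 and 3 amount to in the buildmaster's master.cfg. make_factory() is the existing helper named in step 2 -- its real signature may take other arguments -- and the builder and slave names here are made up for illustration:)

```
# Hypothetical master.cfg fragment for a new deb-producing builder.
c['builders'].append({
    'name': 'hardy-amd64',                    # assumed builder name
    'slavename': 'hardy-amd64-slave',         # the buildslave from step 1 (name assumed)
    'builddir': 'pycryptopp-hardy-amd64',
    'factory': make_factory(build_deb=True,   # step 2
                            upload_deb=True), # step 3
})
```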

I am motivated to do these steps for hardy-amd64, but pycryptopp doesn't have a hardy-amd64 buildslave. I guess we're waiting on someone to contribute a hardy amd64 buildslave and contribute their time installing the flappclient furls.

I'm personally not motivated to do these steps for other platforms at this time. In particular I don't think we should support platforms that no longer get security fixes from their operating system provider, such as Ubuntu Gutsy, because there's little point in running a secure Tahoe-LAFS node on an insecure operating system.

But I guess if someone else goes to all the effort to maintain the buildslave for any platform then I'll go to the effort of configuring it in the master.cfg and creating the flappfurls.

Brian reports that the pycryptopp .deb for Hardy-i386 that we currently host is built wrong:

<warner> the 0.5.16-r669 deb declares a dependency on python-central >=0.6.7

And python-central 0.6.7 is too new for Hardy, apparently. I just realized that pycryptopp doesn't have a buildslave for Hardy-i386 any more than it has one for Hardy-amd64, so I'll change the wiki/DownloadDebianPackages to show that we don't have .deb's of pycryptopp for Hardy-i386 and I'll eventually post to tahoe-dev asking for someone to contribute a hardy-i386 and hardy-amd64 buildslave for pycryptopp.

Brian asked me to clarify that the fact that Tahoe-LAFS ships with Karmic doesn't mean that we provide .deb's of newer versions of Tahoe-LAFS for Karmic. I'll see if I can update wiki/DownloadDebianPackages to show that what you get in Karmic is only Tahoe-LAFS v1.5.0.

Author

FYI, we have hardware for a hardy-i386 buildslave (a machine known as "deharo1"), but it's currently offline. I'll try to schedule some colo time next weekend to bring it back online. We also have a second machine which could probably be rebuilt to be a karmic buildslave.

#422 was a duplicate of this.

zooko changed title from need .deb's of pycryptopp and zfec to have buildslaves automatically build debian packages of foolscap, zfec, pycryptopp, pyutil, argparse, zbase32 2009-12-12 04:21:42 +00:00

#498 was a duplicate of this.

The remaining part of #978 was a duplicate of this. Copying a possibly-relevant comment here:

leif wrote:

> Following ioerror's instructions in debian-docs-patch-final.txt (found on ticket #961) I was able to build a Debian lenny package.
>
> The only thing I did differently than ioerror was that I also used stdeb to make debs of the two remaining things he easy_install'd: argparse and zbase32.
>
> To make argparse's setup.py happy, I ran `sed -i -e 's/__file__/"."/' setup.py`.

Is this still relevant given that there are official Debian packages of our dependencies now?

The deb builders have been decommissioned.

daira added the
r/invalid
label 2011-08-26 22:22:07 +00:00
daira closed this issue 2011-08-26 22:22:07 +00:00
Reference: tahoe-lafs/trac#769