New introducer

[Imported from Trac: page VolunteerGrid, version 40]
francois 2009-12-11 19:38:27 +00:00
parent ef2236c280
commit 59490eaf0e

# About
The volunteergrid is a tahoe storage grid from users, for users. Unlike the [TestGrid](TestGrid), which may be flushed at any time for development/testing reasons, the volunteergrid is meant as a stable alternative to the commercial [allmydata.com grid](http://allmydata.com/). It was [proposed by Eugene L.](http://allmydata.org/pipermail/tahoe-dev/2009-February/001248.html) in February 2009 and an introducer [was set up](http://allmydata.org/pipermail/tahoe-dev/2009-February/001341.html) by Zooko after the idea received a fair amount of praise. This introducer unfortunately died when nooxie's upgrade went wrong. A [new, more resilient introducer](http://allmydata.org/pipermail/tahoe-dev/2009-December/003290.html) was set up by [François Deppierraz](mailto:francois(at)ctrlaltdel.ch) in December 2009.
There are usually between 15 and 20 active server nodes, but new nodes are always welcome.
# Setting up a volunteergrid (storage) node
* Follow the [install instructions](http://allmydata.org/source/tahoe/trunk/docs/install.html)
* If your storage server is behind a firewall/NAT, please set up port forwarding to your *tub.port* and point *tub.location* to your external IP address (see [configuration](http://allmydata.org/source/tahoe/trunk/docs/configuration.txt)).
* Set your introducer.furl to *pb://yni7uuz6oc3lxvjdcivm7mzz6v64yibx@volunteergrid.allmydata.org:52627,tahoe.ctrlaltdel.ch:52627,volunteergrid.lothar.com:52627,introducer.volunteergrid.org:52627/introducer*
* Please subscribe to the [volunteergrid-l](http://allmydata.org/cgi-bin/mailman/listinfo/volunteergrid-l) mailing list - it is very low-traffic but might feature important announcements.
* Add your node to the [map](http://webcontent.osm.lab.rfc822.org/tahoe/).
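The bullets above correspond to a few settings in your node's `tahoe.cfg`. Here is a minimal sketch of the relevant sections, assuming the standard layout described in the [configuration](http://allmydata.org/source/tahoe/trunk/docs/configuration.txt) document; the nickname, port number, and external hostname are placeholders you would replace with your own:

```ini
# tahoe.cfg -- sketch of the sections relevant to a volunteergrid storage node.
# "my-node", "example.dyndns.org", and port 54321 are illustrative placeholders.

[node]
nickname = my-node
# Pin the listening port so you can forward it through your firewall/NAT.
tub.port = 54321
# Advertise your *external* address so other nodes can reach you.
tub.location = example.dyndns.org:54321

[client]
# The volunteergrid introducer furl from the list above.
introducer.furl = pb://yni7uuz6oc3lxvjdcivm7mzz6v64yibx@volunteergrid.allmydata.org:52627,tahoe.ctrlaltdel.ch:52627,volunteergrid.lothar.com:52627,introducer.volunteergrid.org:52627/introducer

[storage]
# Offer storage space to the grid (this is what makes it a storage node).
enabled = true
```

After editing the file, restart your node so the new settings take effect.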
# Links
* [Map of servers and people involved with the volunteergrid](http://webcontent.osm.lab.rfc822.org/tahoe/)
* [Introducer web interface](http://volunteergrid.allmydata.org:8123/)
* [Public web interface (secorp.net)](http://secorp.net:8123) provided by Peter, the CEO of allmydata.com
* [Public web interface (soultcer.net)](http://tahoe.soultcer.net) provided by David T.
# Servers
## Introducer
The introducer is currently run by [François Deppierraz](mailto:francois(at)ctrlaltdel.ch) on *tahoe.ctrlaltdel.ch*, an OpenVZ VE running Debian lenny.
In the unfortunate case where the current introducer disappears, the following people should be able to bring it back to life: Zooko, Brian Warner, and David T. Ask on the mailing list or contact them directly.
## Storage Nodes