[Imported from Trac: page QuotaManagement, version 1]
Here's a basic plan for how to configure "managed introducers". The basic
idea is that we have two types of grids: managed and unmanaged. The current
code implements "unmanaged" grids: a complete free-for-all, where anyone who
can get to the Introducer can thus get to all the servers, and anyone who can
get to a server gets to use as much space as they want. In this mode, each
client uses their 'introducer.furl' to connect to the Introducer, which
serves two purposes: tell the client about all the servers they can use, and
tell all other clients about the server being offered by the new node.
The "managed introducer" approach is for an environment where you want to be
able to keep track of who is using what, and to prevent unmanaged clients
from using any storage space.
In this mode, we have an Account Manager instead of an Introducer. Each
client gets a special, distinct facet on this account manager: this gives
them control over their account, and allows them to access the storage space
enabled by virtue of having that account. This facet is stored in
"my-account.furl", which replaces "introducer.furl" for this purpose.
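
Minting a distinct facet per client could work roughly like this; a minimal
sketch, where the `make_account_furl` helper and the simplified FURL layout
are my own illustrative inventions, not the real Foolscap API:

```python
import os

def make_account_furl(am_tubid: str) -> str:
    """Mint a distinct, unguessable account-facet FURL for one client.

    Hypothetical helper: the real Foolscap FURL format is more involved,
    but the key property is the same -- each client gets its own
    unguessable swissnum.
    """
    swissnum = os.urandom(16).hex()  # the unguessable part of the FURL
    return f"pb://{am_tubid}@am.example.com:1234/{swissnum}"

# Each client would store its own facet in "my-account.furl", which takes
# the place of "introducer.furl" for this purpose:
furl = make_account_furl("am-tubid")
```

Because each facet carries a fresh swissnum, two clients (or the same client
asking twice) never share a capability string.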
In addition, the servers get an "account-manager.furl" instead of an
"introducer.furl". The servers connect to this object and offer themselves as
storage servers. The Account Manager remembers a list of all the
currently-available storage servers.
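
The Account Manager's bookkeeping for this could be as simple as the
following in-memory sketch (class and method names are hypothetical; in the
real system, servers would connect over Foolscap via "account-manager.furl"
rather than call a method directly):

```python
class AccountManager:
    """Remembers which storage servers are currently available.

    Hypothetical sketch of the Account Manager role described above.
    """
    def __init__(self):
        self.servers = {}  # nodeid -> reference to the server

    def offer_storage(self, nodeid, server_ref):
        # Called by a storage server when it connects and offers itself.
        self.servers[nodeid] = server_ref

    def available_servers(self):
        return list(self.servers)

am = AccountManager()
am.offer_storage("server-1", object())
```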
When a client wants more storage servers (perhaps updated periodically, and
perhaps using some sort of minimal update protocol: Bloom Filters!), they
contact their Account object and ask for introductions to storage servers.
This causes the Account Manager to go to all servers that the client doesn't
already know about and tell them: "generate a FURL to a facet for the benefit
of client 123, and give me that FURL". The Account Manager then sends the
list of new FURLs to the client, who adds them to its peerlist. This peerlist
contains tuples of (nodeid, FURL).
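
The introduction step above can be sketched as follows; the `Server` class
and `introduce` function are illustrative stand-ins for what would really be
remote Foolscap calls:

```python
import os

class Server:
    """Hypothetical storage server that mints facet FURLs on demand."""
    def __init__(self, nodeid):
        self.nodeid = nodeid
        self.facets = {}  # swissnum -> account number

    def make_facet_furl(self, account_number):
        # "generate a FURL to a facet for the benefit of client N"
        swissnum = os.urandom(16).hex()
        self.facets[swissnum] = account_number
        return f"pb://{self.nodeid}@host.example:1234/{swissnum}"

def introduce(servers, account_number, known_nodeids):
    """Account Manager side: ask every server the client does not already
    know about for a fresh facet FURL, and return the new (nodeid, FURL)
    tuples for the client's peerlist."""
    new_peers = []
    for server in servers:
        if server.nodeid not in known_nodeids:
            furl = server.make_facet_furl(account_number)
            new_peers.append((server.nodeid, furl))
    return new_peers

servers = [Server("s1"), Server("s2")]
# The client already knows s1, so only s2 produces a new introduction:
peerlist = introduce(servers, account_number=123, known_nodeids={"s1"})
```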
The Storage Server will grant a facet to anyone that the Account Manager
tells them to. The Storage Server is really just updating a table that maps
from a random number (the FURL's swissnum) to the system-wide small-integer
account number. The FURL will dereference to an object that adds an
accountNumber=123 to all write() calls, so that they can be stored in leases.
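
The server side of this might look like the sketch below; the names are
hypothetical, not the real storage-server API, but it shows both the
swissnum-to-account table and the facet object that pins the account number
onto every write():

```python
import os

class StorageServer:
    """Sketch of the storage server's bookkeeping (hypothetical names)."""
    def __init__(self):
        self.facet_table = {}  # swissnum -> system-wide account number
        self.leases = []       # (storage_index, account_number)

    def grant_facet(self, account_number):
        # Done on the Account Manager's say-so: record the mapping and
        # hand back the object the facet FURL would dereference to.
        swissnum = os.urandom(16).hex()
        self.facet_table[swissnum] = account_number
        return AccountFacet(self, account_number)

    def write(self, storage_index, data, account_number):
        # The account number arrives on every write, so it can be
        # stored in the lease.
        self.leases.append((storage_index, account_number))

class AccountFacet:
    """The object a facet FURL dereferences to: it adds the account
    number to all write() calls on the client's behalf."""
    def __init__(self, server, account_number):
        self._server = server
        self._account = account_number

    def write(self, storage_index, data):
        self._server.write(storage_index, data,
                           account_number=self._account)

server = StorageServer()
facet = server.grant_facet(account_number=123)
facet.write("si-1", b"share data")
```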
In this approach, the Account Manager is a bottleneck only for the initial
contact: the clients all remember their list of Storage Server FURLs for a
long time. Clients must contact their Account to take advantage of new
servers: the update traffic for this needs to be examined. I can imagine this
working reasonably well up to a few hundred servers and say 100k clients, if
the clients are only asking about new servers once a day (about one query per
second on average).
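
The arithmetic behind that estimate: 100,000 clients each asking once per
day works out to a little over one query per second:

```python
clients = 100_000
seconds_per_day = 24 * 60 * 60  # 86,400
queries_per_second = clients / seconds_per_day
# about 1.16 queries per second on average, i.e. roughly the
# "one query per second" figure above
```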