'readonly_storage' and 'reserved_space' not honored for mutable-slot write requests #390

Open
opened 2008-04-22 17:39:47 +00:00 by warner · 17 comments
warner commented 2008-04-22 17:39:47 +00:00
Owner

The `remote_allocate_buckets` call correctly says "no" when the `readonly_storage` config flag is on, but the corresponding `remote_slot_testv_and_readv_and_writev` (for mutable files) does not. This means that a storage server which has been kicked into readonly mode (say, if the drive is starting to fail and it has been left online just to get the shares off of that drive and on to a new one) will continue to accumulate new mutable shares.
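For illustration, a minimal Python sketch of the asymmetry being reported (class shape and method signatures are simplified stand-ins, not the actual `src/allmydata/storage` code):

```python
class StorageServer:
    # Simplified stand-in for the Tahoe-LAFS storage server, for illustration only.
    def __init__(self, readonly_storage=False):
        self.readonly_storage = readonly_storage  # the tahoe.cfg flag in question

    def remote_allocate_buckets(self, storage_index, sharenums, allocated_size):
        if self.readonly_storage:
            # Immutable path: the flag is honored, so no new shares are accepted.
            return set(), {}
        ...  # normal immutable-share allocation

    def remote_slot_testv_and_readv_and_writev(self, storage_index, tw_vectors, read_vector):
        # The reported bug: no equivalent readonly_storage check here, so new
        # mutable shares keep landing on a server that was marked read-only.
        ...  # apply test/write vectors and return read results
```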
tahoe-lafs added the code-storage, major, defect, 1.0.0 labels 2008-04-22 17:39:47 +00:00
tahoe-lafs added this to the eventually milestone 2008-04-22 17:39:47 +00:00
zooko commented 2008-05-30 04:14:07 +00:00
Author
Owner

Practically speaking, shouldn't read-only storage normally be implemented by remounting the storage partition read-only?

This implies that one should normally not keep anything else (like Tahoe log files) on the partition where one keeps Tahoe storage.

zooko commented 2008-05-30 04:16:02 +00:00
Author
Owner

Oh, but now I realize that making it read-only at that level might not propagate back to the client when the client calls `remote_allocate_buckets` or `remote_slot_testv_and_readv_and_writev`. Or, actually, it might! Because...
zooko commented 2008-05-30 04:19:53 +00:00
Author
Owner

... because [`remote_allocate_buckets()`]source:src/allmydata/storage@2537#L744 and [`remote_slot_testv_and_readv_and_writev()`]source:src/allmydata/storage@2537#L931 both try to write to the filesystem before they return, so if that filesystem is read-only, then a nice foolscap exception will be sent back to the client.
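As a rough sketch of that failure mode (illustrative names, not the real share-writing code): writing to a share file on a read-only filesystem raises `OSError` (`EROFS`), and that exception propagates out of the remote method, where Foolscap reports it to the client as a remote failure.

```python
def write_share_data(sharefile_path, offset, data):
    # Illustrative only, not the real Tahoe-LAFS share-writing code.
    with open(sharefile_path, "r+b") as f:  # raises OSError (EROFS) when the
        f.seek(offset)                      # filesystem is mounted read-only
        f.write(data)
    # The uncaught OSError escapes the remote method; Foolscap serializes it
    # and the client sees a remote exception instead of a silent success.
```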
zooko commented 2008-05-30 04:22:01 +00:00
Author
Owner

Those hyperlinks should be [`remote_allocate_buckets()`]source:src/allmydata/storage.py@2537#L744 and [`remote_slot_testv_and_readv_and_writev()`]source:src/allmydata/storage.py@2537#L931.
zooko commented 2008-05-30 04:24:20 +00:00
Author
Owner

So, I would like us to consider removing the "read only storage" feature from the Tahoe source code. People who can't make their whole partition read-only can use simple filesystem permissions to make the storage directory unwriteable to the account that runs the Tahoe node. This technique would be less buggy than the implementation of read-only in the Tahoe source code, and it would require less of our developer time to maintain.
tahoe-lafs changed title from 'readonly_storage' not honored for mutable-slot write requests to 'readonly_storage' not honored for mutable-slot write requests (or shall we stop offering read-only storage as a Tahoe configuration option) 2008-05-30 04:24:20 +00:00
tahoe-lafs modified the milestone from eventually to undecided 2008-06-01 21:08:19 +00:00
zooko commented 2008-06-02 20:44:58 +00:00
Author
Owner

Brian and I had a big conversation on the phone about this and came up with a good design -- efficient, robust, and not too complicated. Brian wrote it up:

http://allmydata.org/pipermail/tahoe-dev/2008-May/000630.html

zooko commented 2008-06-06 23:31:53 +00:00
Author
Owner

Hm... why did you put this one in "undecided"? How about v1.2.0...

tahoe-lafs modified the milestone from undecided to 1.2.0 2008-06-06 23:31:53 +00:00
tahoe-lafs changed title from 'readonly_storage' not honored for mutable-slot write requests (or shall we stop offering read-only storage as a Tahoe configuration option) to 'readonly_storage' not honored for mutable-slot write requests 2008-06-06 23:31:53 +00:00
warner commented 2008-06-07 01:20:41 +00:00
Author
Owner

because I figured that we'd replace it with something other than "readonly_storage", and that the accounting / dict-introducer changes might significantly change what we do with this. It's an issue that we really ought to address for 1.2.0, but I don't know how exactly we're going to do that.

1.2.0 sounds fine.

tahoe-lafs modified the milestone from 1.5.0 to eventually 2009-06-30 12:39:50 +00:00
davidsarah commented 2010-01-15 20:21:43 +00:00
Author
Owner

As long as we have the `reserved_space` setting, that should also be honoured for writes to mutable slots, so an explicit space check is needed just as in `remote_allocate_buckets`.
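A minimal sketch of such a check (the helper names are assumptions for illustration, not the existing server code), mirroring the free-space accounting described for `remote_allocate_buckets`:

```python
import os

def get_available_space(storedir, reserved_space):
    # Free bytes on the storage filesystem, minus the operator's reservation.
    s = os.statvfs(storedir)
    free = s.f_frsize * s.f_bavail        # bytes available to unprivileged users
    return max(free - reserved_space, 0)

def check_mutable_write(storedir, reserved_space, bytes_to_write):
    # Hypothetical guard for the mutable-slot write path: refuse writes that
    # would dip into reserved_space, analogous to the immutable-share check.
    if bytes_to_write > get_available_space(storedir, reserved_space):
        raise IOError("refusing mutable write: would consume reserved_space")
```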
tahoe-lafs changed title from 'readonly_storage' not honored for mutable-slot write requests to 'readonly_storage' and 'reserved_space' not honored for mutable-slot write requests 2010-01-16 00:47:35 +00:00
davidsarah commented 2010-01-16 00:48:49 +00:00
Author
Owner

Required for #871 (handle out-of-disk-space condition).

Required for #871 (handle out-of-disk-space condition).
tahoe-lafs modified the milestone from eventually to 1.7.0 2010-02-01 20:01:38 +00:00
tahoe-lafs modified the milestone from 1.7.0 to 1.6.1 2010-02-15 19:53:30 +00:00
zooko commented 2010-02-16 05:15:44 +00:00
Author
Owner

The problem is that if you run out of space in your storage server, and you refuse to overwrite a mutable share with a new version, then you are going to continue to serve the older version, which could cause inefficiency, confusion, and perhaps even "rollback" events. Kicking this one out of v1.6.1 on the grounds that it is Feb. 15 and I don't understand what we *should* do, so it is too late to do something about it for the planned Feb. 20 release of v1.6.1. (Also we have lots of other clearer issues in the v1.6.1 Milestone already.)
tahoe-lafs modified the milestone from 1.6.1 to eventually 2010-02-16 05:15:44 +00:00
davidsarah commented 2010-12-30 22:28:07 +00:00
Author
Owner

Replying to [zooko](/tahoe-lafs/trac-2024-07-25/issues/390#issuecomment-107707):

> The problem is that if you run out of space in your storage server, and you refuse to overwrite a mutable share with a new version, then you are going to continue to serve the older version, which could cause inefficiency, confusion, and perhaps even "rollback" events.

OTOH, you can't in general avoid these bad things by not honouring `reserved_space`, because they will happen anyway if the filesystem runs out of space. Perhaps there is a case for starting to refuse storage of immutable shares at a higher reserved-space threshold than for mutable shares, though.
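To make that last suggestion concrete, a hedged sketch (the two-threshold policy and its parameter names are hypothetical, not an existing Tahoe-LAFS setting): refuse new immutable shares while free space is still well above the reservation, but keep allowing mutable writes down to the reservation itself.

```python
def write_allowed(kind, free_bytes, reserved_space, immutable_margin=2.0):
    # Hypothetical two-threshold policy: immutable shares are refused earlier,
    # leaving headroom so that mutable updates can still land as space runs low.
    if kind == "immutable":
        return free_bytes > immutable_margin * reserved_space
    return free_bytes > reserved_space  # "mutable"
```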
davidsarah commented 2010-12-30 22:31:06 +00:00
Author
Owner

Replying to [davidsarah]comment:19:

> OTOH, you can't in general avoid these bad things by not honouring `reserved_space`, because they will happen anyway if the filesystem runs out of space.

... which, as #871 points out, is currently not handled gracefully.
Author
Owner

Replying to [zooko](/tahoe-lafs/trac-2024-07-25/issues/390#issuecomment-107707):

> The problem is that if you run out of space in your storage server, and you refuse to overwrite a mutable share with a new version, then you are going to continue to serve the older version, which could cause inefficiency, confusion, and perhaps even "rollback" events. Kicking this one out of v1.6.1 on the grounds that it is Feb. 15 and I don't understand what we *should* do, so it is too late to do something about it for the planned Feb. 20 release of v1.6.1. (Also we have lots of other clearer issues in the v1.6.1 Milestone already.)

Probably I am failing to understand, but on the off chance that's useful: if the notion of taking a server read-only and having shares migrate off it (which sounds useful) is going to work, then replacing a mutable file with a new version is going to have to find servers to store the new shares, place them, and remove the old shares. So a server failing to accept the new share shouldn't have any direct bearing on the new upload succeeding and the old shares being removed. I would also expect (again, without knowing) that there would be a process of placing the new shares and then, only when successful, removing the old ones.
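A hedged sketch of that ordering (the server-wrapper methods are invented for illustration, not the real Tahoe-LAFS repairer or publisher): place the new version's shares first, and only remove the old shares once enough new ones are confirmed.

```python
def migrate_new_version(new_shares, writable_servers, old_holders, shares_needed):
    # Place new shares on servers that will accept them (a read-only server
    # simply drops out of this set), then clean up the old shares.
    placed = 0
    for share, server in zip(new_shares, writable_servers):
        if server.try_upload(share):            # hypothetical upload method
            placed += 1
    if placed < shares_needed:
        raise RuntimeError("not enough new shares placed; old shares kept")
    for server in old_holders:
        server.remove_old_share()               # hypothetical cleanup step
```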
davidsarah commented 2011-10-22 03:42:24 +00:00
Author
Owner

See also #1568, for the S3 backend.

zooko commented 2011-10-24 15:28:32 +00:00
Author
Owner

From comment:5:ticket:1568:

> For what it is worth, I increasingly think read-only storage should be deprecated for all backends, and people will have to learn how to use their operating system if they want readonliness of storage. When we invented the read-only storage option, I think partly we were thinking of users who could read our docs but didn't want to learn how to use their operating system to set policy. Nowadays I'm less interested in the idea of such users being server operators.
>
> Also, the fact that we've never really finished implementing read-only storage (to include mutables), so that there are weird failure modes that could hit people who rely on it, is evidence that we should not spend our precious engineering time on things that the operating system could do for us and do better.
Author
Owner

Duplicated from <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1568#comment:107695>

I don't really follow this. It seems reasonable for a server operator to decide not to accept new shares, and for this to be separate from whether the server process is able to write to the filesystem where the shares are kept. For example, it might be reasonable to allow lease renewal, or for other metadata to be updated. It might be that not accepting shares should be similar to zero space available, so increasing the size of a mutable share also might not be allowed. And, if the purpose really is decommissioning, then presumably the mechanism used for repair should somehow signal that the share is present but should be migrated, so that a deep-check --repair can put those shares on some other server.

There's a difference between people who don't understand enough to sysadmin a server, and the server having uniform configuration for server-level behavior. When Tahoe is ported to [ITS](http://en.wikipedia.org/wiki/Incompatible_Timesharing_System), it should still be possible to tell it to stop taking shares.
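A hedged sketch of the distinction being drawn (every name here is invented for illustration, not an existing Tahoe-LAFS configuration object): an "accepting new shares" policy switch that is independent of filesystem writability, so lease renewal and other metadata updates keep working.

```python
class SharePolicy:
    # Invented for illustration only.
    def __init__(self, accepting_new_shares=True):
        self.accepting_new_shares = accepting_new_shares

    def may_accept_new_share(self):
        # Decommissioning mode behaves like zero space available: no new
        # shares, and no growing an existing mutable share.
        return self.accepting_new_shares

    def may_renew_lease(self):
        # Lease renewal and metadata updates remain allowed; they still
        # require a writable filesystem, which is a separate concern.
        return True
```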