
Re: stopping webcrawlers using robots.txt

From: Thomas Beale <thomas_at_deepthought.com.au>
Date: 2006-07-10 01:43:43 CEST

Bob Proulx wrote:
> Thomas Beale wrote:
>> I don't think changing the URL is an option - it is published all over
>> the place - everyone in our community knows it.
>
> I understand the pain of it.
>
>> I am quite surprised to find that the configuration which seems to
>> be preferred by the subversion manual is not compatible with
>> managing web robots...
>
> Where does it say this in the subversion manual? I can't find
> anything that recommends that, and if any such passage exists I
> presume it is a documentation bug that should be reported. The
> examples I see all use /repos. The subversion project itself also
> uses the /repos convention.

You are right - I just checked. What I was remembering was the
SVNParentPath approach, under which authorisation is handled one
level down, in each named repository, rather than just off /.
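
For reference, here is a minimal sketch of that layout (the paths,
repository name, and group below are all hypothetical). With
SVNParentPath, each repository sits one level below the parent URL,
and a single mod_authz_svn access file can scope its rules to each
named repository:

   # httpd.conf fragment
   <Location /repos>
     DAV svn
     SVNParentPath /var/svn             # each repo is /var/svn/<name>
     AuthzSVNAccessFile /etc/svn/authz  # one file, per-repo sections
   </Location>

   # /etc/svn/authz
   [groups]
   devs = alice, bob

   [repo_name:/]
   * = r
   @devs = rw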

But it still seems unfortunate that serving repositories as
http://svn.xxx.yyy/repo_name isn't more flexible - it's a nice,
simple URL and easy to remember. We have had no problems with it,
except that we need some way to control robots; one possible
workaround is sketched below.
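
One approach I have seen suggested (untested here, and again all
paths are hypothetical): keep SVNParentPath mounted at /, so the
repositories keep their short URLs, but carve /robots.txt back out
with a more specific <Location> block so that mod_dav_svn never
handles that one URL and the file is served from disk instead:

   # httpd.conf fragment
   <Location />
     DAV svn
     SVNParentPath /var/svn   # repos appear as http://svn.xxx.yyy/<name>
   </Location>

   # the later, more specific Location overrides DAV for this one
   # URL, so Apache's default handler serves the aliased file
   Alias /robots.txt /var/www/robots.txt
   <Location /robots.txt>
     DAV off
   </Location>

   # /var/www/robots.txt - ask crawlers to skip the whole site
   User-agent: *
   Disallow: /

Of course, robots.txt only restrains well-behaved crawlers; anything
that ignores it would have to be blocked some other way.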

- thomas

>
> http://svn.collab.net/repos/svn/trunk
>
> Bob

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
Received on Mon Jul 10 01:45:23 2006

This is an archived mail posted to the Subversion Users mailing list.
