
RE: [PATCH] Cache open repository per connection

From: Sander Striker <striker_at_apache.org>
Date: 2003-07-03 01:20:28 CEST

> From: Greg Stein [mailto:gstein@lyra.org]
> Sent: Wednesday, July 02, 2003 4:26 AM

> Bugs :-)

'lets :) Perceptions... ;) Okay, okay, it was a hack. Happy now? ;P :)

>> +++ subversion/mod_dav_svn/repos.c (working copy)
>> +  /* Get the repository */
>> +  base_path = apr_pstrndup(r->pool, r->uri,
>> +                           ap_find_path_info(r->uri, r->path_info));
> Hmm. I'm not really sure what this is extracting from the URL. A clearer
> comment might be helpful.

Heh heh, it extracts the Location part. I came across this piece of
code a while ago in the httpd codebase, and it seems to be about the only
way to get at the Location part, apart from storing it in the dir config
at dir config creation time and retrieving it from there later on.

>> + repos_key = apr_pstrcat(r->pool, "mod_dav_svn:", base_path, root_path);
> Might want to put a ,NULL on the end there. Otherwise, your key is random
> and will never get a cache-hit :-)

Crap! :) You're right. This would explain why I never saw any speedup ;) :)

> In any case, I'd recommend caching on the fs_path instead of the URI.

Nope. See the discussion on the list. Greg Hudson was best able to express why
that can be a bad idea. Basically it comes down to being able to map multiple
Locations to the same repository. Come to think of it, root_path will probably
do just fine... but we might consider blending in the hostname as well.

>> + repos->repos = (void *)apr_table_get(r->connection->notes, repos_key);
> I'd recommend using r->connection->pool's userdata instead of the notes.
> Tables are not meant to store binary values; I'm not sure that it is very
> reliable.

Oh come on, live a little ;).

Sidenote: we need to fix the apr docs:

 * Tables are used to store entirely opaque structures
 * for applications, while Arrays are usually used to
 * deal with string lists.

Of course this isn't true when you are using the add/set functions, as opposed
to addn/setn, since those try to copy the data.

> You could (again) see corrupted data or cache misses.

Not in this case. No copying of the reference takes place. But I agree that
using the connection pool's userdata is cleaner. New patch below.


Index: subversion/mod_dav_svn/repos.c
--- subversion/mod_dav_svn/repos.c	(revision 6386)
+++ subversion/mod_dav_svn/repos.c	(working copy)
@@ -1076,6 +1076,7 @@
   const char *repos_name;
   const char *relative;
   const char *repos_path;
+  const char *repos_key;
   const char *version_name;
   svn_error_t *serr;
   dav_error *err;
@@ -1181,15 +1182,27 @@
   /* Remember who is making this request */
   repos->username = r->user;
 
-  /* open the SVN FS */
-  serr = svn_repos_open(&(repos->repos), fs_path, r->pool);
-  if (serr != NULL)
-    return dav_svn_convert_err(serr, HTTP_INTERNAL_SERVER_ERROR,
-                               apr_psprintf(r->pool,
-                                            "Could not open the SVN "
-                                            "filesystem at %s",
-                                            fs_path));
+  /* Cache the open repository.  Key it off root_path, which should be
+   * more unique than fs_path, given that two Locations may point to the
+   * same repository.
+   */
+  repos_key = apr_pstrcat(r->pool, "mod_dav_svn:", root_path, NULL);
+  apr_pool_userdata_get((void **)&repos->repos, repos_key,
+                        r->connection->pool);
+  if (repos->repos == NULL)
+    {
+      serr = svn_repos_open(&(repos->repos), fs_path, r->connection->pool);
+      if (serr != NULL)
+        {
+          return dav_svn_convert_err(serr, HTTP_INTERNAL_SERVER_ERROR,
+                                     apr_psprintf(r->pool,
+                                                  "Could not open the SVN "
+                                                  "filesystem at %s",
+                                                  fs_path));
+        }
+      /* Cache the open repos for the next request on this connection */
+      apr_pool_userdata_set(repos->repos, repos_key, NULL,
+                            r->connection->pool);
+    }
 
   /* cache the filesystem object */

Received on Thu Jul 3 01:21:16 2003

This is an archived mail posted to the Subversion Dev mailing list.
