> -----Original Message-----
> From: Philip Martin [mailto:philip.martin_at_wandisco.com]
> Sent: Thursday, 18 February 2010 19:22
> To: dev_at_subversion.apache.org
> Subject: Is sqlite fast enough?
>
> How fast are sqlite queries compared to the old-wc entry caching?
> Operations like update benefit from faster locking, but in old-wc,
> status does no locking at all. Also the multiple entries files give the
> data a tree-like organisation that is not modelled quite so well by an
> SQL table. Is sqlite going to be at least as fast as old-wc at running
> status when the OS cache is hot?
We assume it is at least as fast, and for several operations it is faster.

In the old, nearly unmaintainable WC-1.0 code we didn't use one entry
cache, but in several common cases three separate ones that weren't kept
in sync. Issues like that made the code less and less maintainable.

Around September 2009, running the entire testsuite, we were about 50%
slower than 1.6, even though we performed more tests and included entry
dumping in the tests. Since then we have doubled the property handling
(properties are handled both in-db and in-files). All of this with
several of the entry caches removed, no single database yet, and
absolutely no performance tuning.

Just moving to the pristine store and moving the properties should bring
us to around the same performance as 1.6 on our testsuite, and that is
before looking at the gains from not reading all information at once
every time.
To get one svn_wc_entry_t instance we have to read BASE, ACTUAL and
WORKING for every node in the same directory, and in most cases also for
the parents of the current directory, just to fill in the entry's values.
(And that is forgetting the information in the parent stubs that we also
need.)
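
For illustration, a minimal sketch (my own, not WC-NG code) of the
per-node lookup this turns into once the data lives in one table, using
the base_node schema from your script [2] below:

    # Hypothetical sketch, not WC-NG code: a single indexed lookup per
    # node against the base_node schema from script [2].
    import sqlite3

    conn = sqlite3.connect('wcx.db')
    row = conn.execute(
        "select presence, kind from base_node"
        " where wc_id = ? and local_relpath = ?",
        (1, "zig0/foo3")).fetchone()
    print(row)  # e.g. ('normal', 'file')
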
We generally feel that we will be OK, and most likely better off, with
WC-NG. Besides that, the entries code was far from maintainable as it
stood. Yes, we could add a flag here and there, but every flag made
Subversion slower and less maintainable, and it wouldn't fix the real
issues: being able to update under a conflict, being able to revert all
operations, handling case-insensitive filesystems, etc. etc. Most of
these issues weren't fixable without redesigning the storage layer.
> I've been vaguely assuming that it would be OK but this week I felt it
> was time to investigate. What would we do in 3 months time if wc-ng
> turns out to be an order of magnitude slower than old-wc?
The current state is about 50% slower, and I don't see any reason why it
would get any slower when we move to a single DB or to a more stable
pristine store that needs fewer file operations.

We still use far too many svn_wc_entry_t operations, and I don't see
those disappearing within just a few weeks. I would be very surprised if
we get to a single database before the second half of April. (Feel free
to prove me wrong on this ;-)
If it is slower than the older version we have code that we can tune
further, and I'm 100% sure that we will get a faster Subversion working
copy on Windows. (Just losing the lock slowdown is such a difference that
it can't get slower for large working copies.)

WC-1.0 was beyond the point where we could tune it. I'm sure we can make
the WC-NG database a few times faster than it is now through better
tuning and profiling, but it doesn't seem necessary to do that to see
huge performance gains over 1.6. (And I really expect that tuning there
is not the right place anyway, because processing the actual files in the
working copy takes the largest share of the processing time.)
> I need a big working copy to make status take a significant amount of
> time, so I used a shell script[1] to create about 100,000 files in
> about 900 directories. Then I wrote a python script[2] to create and
> populate a single sqlite database with rows representing a similar set
> of files and directories. Finally I wrote a simple C client[3] to query
> the sqlite database in a manner that vaguely approximates status.
How did you use transactions? See the sqlite documentation: using
transactions the right way changes the performance characteristics
enormously. We can't apply most of these optimizations before we get to a
single database. (Transactions help on writing, of course, but the read
performance of operations that quickly follow each other increases
considerably too.)
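
As a rough sketch (assuming the schema from your script [2]; this is
illustrative, not our code), compare one transaction around a batch of
inserts with the default of one implicit commit per statement:

    # Sketch only: one transaction around the whole batch instead of
    # one implicit commit (and fsync) per INSERT.
    import sqlite3

    conn = sqlite3.connect('wcx.db')
    with conn:  # emits BEGIN ... COMMIT around the loop
        for i in range(1000):
            conn.execute(
                "insert into base_node (wc_id, local_relpath,"
                " parent_relpath, presence, kind)"
                " values (?, ?, '', 'normal', 'file')",
                (1, "extra%d" % i))
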
>
> Initially results were not encouraging. On my machine 'svn status'
> using 1.6 takes about 1.1 seconds and the sqlite client took nearly 60
> seconds to query the database, and most of that was in sqlite3_step.
> That sort of slowdown would be disastrous. After some thought I
> realised I needed another index:
>
> create unique index i_local on base_node (wc_id, local_relpath)
I don't think we have more than a few basic indexes yet (or transactions
around the performance-critical code blocks, for that matter). We
postponed that work to future performance tuning.
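
(For what it's worth, sqlite can report whether a query uses such an
index; a small sketch against your test database:)

    # Sketch: EXPLAIN QUERY PLAN shows whether sqlite searches the
    # i_local index or falls back to a full scan of base_node; the
    # exact wording of the output varies between sqlite versions.
    import sqlite3

    conn = sqlite3.connect('wcx.db')
    plan = conn.execute(
        "explain query plan select kind from base_node"
        " where wc_id = ? and local_relpath = ?", (1, "")).fetchall()
    print(plan)
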
> With that added the sqlite client completes the queries in about 0.35
> of a second, comfortably faster than the 1.6 status. Please note that
> the sqlite client is only reading the base_node table; it's not doing
> stat() on any working files or reading working_node or actual_node.
> Also the base_node table in my test has fewer columns than wc-ng will
> use.
>
> As a final test I patched[4] my 1.6 build to remove most of the
> querying of the working files. With this change 'svn status' goes down
> to about 0.55 of a second, still slower than the sqlite client.
>
> My major concern that sqlite might be an order of magnitude slower
> than old-wc is gone. Obviously the locking operations are likely to be
> faster and the read-only operations look good too, although the margin
> is not that great.
In a typical svn status call some files in the working copy have changed
and need a compare against the file in the pristine store, especially if
the files use keywords and/or eol settings. Most of svn status's disk I/O
is in these scans, not in accessing the entry/node/tree state. So even if
sqlite were slower for these simple reads, I don't think you would see
it: the better caching (and read-ahead characteristics) of a single file,
compared to the hundreds or thousands of small (below sector-size) files
we had in WC-1.0, should hide any difference in real-world usage.
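
(The usual trick, sketched below with hypothetical recorded values and
not taken from Subversion's code, is a cheap size/timestamp check so the
expensive detranslate-and-compare only runs for files that look changed:)

    # Hypothetical sketch: cheap pre-check before the expensive
    # keyword/eol detranslation and compare against the pristine.
    import os

    def needs_content_compare(path, recorded_size, recorded_mtime):
        st = os.stat(path)
        # Matching size and timestamp: assume the file is unmodified.
        return (st.st_size != recorded_size
                or int(st.st_mtime) != recorded_mtime)
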
Real performance measurements will require a lot more work in libsvn_wc;
we are nowhere near the point where we can calculate the final speed. But
almost every change we make now makes WC-NG faster, whereas we spent the
first half of 2009 making WC-NG slower by removing entry-cache specifics.
Bert
>
>
> [1] script to create a large working copy
>
> #!/bin/sh
>
> svnadmin=svnadmin
> svn=svn
> url=file://`pwd`/repo
>
> rm -rf repo wc empty
> $svnadmin create repo
> $svn mkdir -mm $url/trunk
> touch empty
> for i in `seq 0 99` ; do $svn import -mm empty $url/trunk/foo$i; done
> rm empty
> for i in `seq 0 29` ; do $svn cp -mm -r101 $url/trunk $url/trunk/zag$i; done
> for i in `seq 0 29` ; do $svn cp -mm -r131 $url/trunk $url/trunk/zig$i; done
> $svn co $url/trunk wc
>
> [2] script to create a sqlite database
>
> #!/usr/bin/python
>
> import os, sqlite3
>
> try: os.remove('wcx.db')
> except: pass
>
> c = sqlite3.connect('wcx.db')
> c.execute("""create table repository (
> id integer primary key autoincrement,
> root text unique not null,
> uuid text not null)""")
> c.execute("""create index i_uuid on repository (uuid)""")
> c.execute("""create index i_root on repository (root)""")
> c.execute("""create table wcroot (
> id integer primary key autoincrement,
> local_abspath text unique)""")
> c.execute("""create unique index i_local_abspath on wcroot (local_abspath)""")
> c.execute("""create table base_node (
> wc_id integer not null references wcroot (id),
> local_relpath text not null,
> repos_id integer references repository (id),
> repos_relpath text,
> parent_relpath text,
> presence text not null,
> kind text not null)""")
> c.execute("""create unique index i_local on base_node (wc_id,
> local_relpath)""") c.execute("""create index i_parent on base_node (wc_id,
> parent_relpath)""") c.execute("""insert into repository (root, uuid)
values (
> "http://example.com/repo",
> "f738be9e-409d-481f-b246-1fb6a969aba2")""")
> c.execute("""insert into wcroot(local_abspath) values ("/wc")""")
>
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "",
> 1,
> "trunk",
> NULL,
> "normal",
> "dir")""")
>
> for i in range(100):
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "foo"""+str(i)+"""",
> 1,
> "trunk/foo"""+str(i)+"""",
> "",
> "normal",
> "file")""")
> if i >= 30:
> continue
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "zag"""+str(i)+"""",
> 1,
> "trunk/zag"""+str(i)+"""",
> "",
> "normal",
> "dir")""")
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "zig"""+str(i)+"""",
> 1,
> "trunk/zig"""+str(i)+"""",
> "",
> "normal",
> "dir")""")
>
> for j in range(100):
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "zag"""+str(i)+"/foo"+str(j)+"""",
> 1,
> "trunk/zag"""+str(i)+"/foo"+str(j)+"""",
> "zag"""+str(i)+"""",
> "normal",
> "file")""")
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "zig"""+str(i)+"/foo"+str(j)+"""",
> 1,
> "trunk/zig"""+str(i)+"/foo"+str(j)+"""",
> "zig"""+str(i)+"""",
> "normal",
> "file")""")
> if j >= 30:
> continue
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "zig"""+str(i)+"/zag"+str(j)+"""",
> 1,
> "trunk/zig"""+str(i)+"/zag"+str(j)+"""",
> "zig"""+str(i)+"""",
> "normal",
> "dir")""")
> for k in range(100):
> c.execute("""insert into base_node (
> wc_id,
> local_relpath,
> repos_id,
> repos_relpath,
> parent_relpath,
> presence,
> kind)
> values (
> 1,
> "zig"""+str(i)+"/zag"+str(j)+"/foo"+str(k)+"""",
> 1,
> "trunk/zig"""+str(i)+"/zag"+str(j)+"/foo"+str(k)+"""",
> "zig"""+str(i)+"/zag"+str(j)+"""",
> "normal",
> "file")""")
>
> c.commit()
>
> [3] client to query the database
>
> #include "svn_pools.h"
> #include "svn_sqlite.h"
> #include <stdio.h>
> #include <stdlib.h>   /* EXIT_SUCCESS */
> #include <string.h>   /* strcmp */
>
> static svn_error_t *
> status_query(svn_sqlite__db_t *sdb,
> const char *local_relpath,
> svn_boolean_t display,
> apr_pool_t *pool)
> {
> svn_sqlite__stmt_t *stmt;
> svn_boolean_t have_row;
> const char *kind;
> apr_pool_t *subpool;
> apr_array_header_t *subdirs = apr_array_make(pool, 10,
> sizeof(const char *));
> int i;
>
> SVN_ERR(svn_sqlite__get_statement(&stmt, sdb, 0));
> SVN_ERR(svn_sqlite__bindf(stmt, "is", 1, local_relpath));
> SVN_ERR(svn_sqlite__step(&have_row, stmt));
> if (!have_row)
> {
> SVN_ERR(svn_sqlite__reset(stmt));
> return SVN_NO_ERROR;
> }
> kind = svn_sqlite__column_text(stmt, 0, pool);
> if (display)
> printf("%s %s\n", local_relpath, kind);
> SVN_ERR(svn_sqlite__reset(stmt));
>
> if (!strcmp(kind, "dir"))
> {
> SVN_ERR(svn_sqlite__get_statement(&stmt, sdb, 1));
> SVN_ERR(svn_sqlite__bindf(stmt, "is", 1, local_relpath));
> SVN_ERR(svn_sqlite__step(&have_row, stmt));
> while (have_row)
> {
> const char *child_relpath = svn_sqlite__column_text(stmt, 0, NULL);
> kind = svn_sqlite__column_text(stmt, 1, NULL);
> if (!strcmp(kind, "dir"))
> APR_ARRAY_PUSH(subdirs, const char *)
> = apr_pstrdup(pool, child_relpath);
> else if (display)
> printf("%s %s\n", child_relpath, kind);
> SVN_ERR(svn_sqlite__step(&have_row, stmt));
> }
> SVN_ERR(svn_sqlite__reset(stmt));
> }
>
> subpool = svn_pool_create(pool);
> for (i = 0; i < subdirs->nelts; ++i)
> {
> svn_pool_clear(subpool);
> SVN_ERR(status_query(sdb, APR_ARRAY_IDX(subdirs, i, const char*),
> display,
> subpool));
> }
> svn_pool_destroy(subpool);
>
> return SVN_NO_ERROR;
> }
>
> int main()
> {
> apr_pool_t *pool;
> svn_sqlite__db_t *sdb;
> const char * const statements[] = {
> "select kind from base_node" \
> " where wc_id = ?1 and local_relpath = ?2;",
> "select local_relpath, kind from base_node" \
> " where wc_id = ?1 and parent_relpath = ?2;",
> NULL
> };
>
> apr_initialize();
> pool = svn_pool_create(NULL);
> SVN_INT_ERR(svn_sqlite__open(&sdb, "wcx.db",
> svn_sqlite__mode_rwcreate,
> statements, 0, NULL, pool, pool));
> SVN_INT_ERR(status_query(sdb, "", FALSE, pool));
>
> return EXIT_SUCCESS;
> }
>
> [4] patch for 1.6
>
> Index: subversion/libsvn_wc/status.c
> ===================================================================
> --- subversion/libsvn_wc/status.c (revision 905665)
> +++ subversion/libsvn_wc/status.c (working copy)
> @@ -272,6 +272,7 @@
>
> svn_lock_t *repos_lock = NULL;
>
> +#if 0
> /* Check for a repository lock. */
> if (repos_locks)
> {
> @@ -302,6 +303,7 @@
> SVN_ERR(svn_wc__get_tree_conflict(&tree_conflict, path, adm_access,
> pool));
>
> if (! entry)
> +#endif
> {
> /* return a fairly blank structure. */
> stat = apr_pcalloc(pool, sizeof(*stat));
> @@ -316,25 +318,7 @@
> stat->tree_conflict = tree_conflict;
> stat->file_external = FALSE;
>
> - /* If this path has no entry, but IS present on disk, it's
> - unversioned. If this file is being explicitly ignored (due
> - to matching an ignore-pattern), the text_status is set to
> - svn_wc_status_ignored. Otherwise the text_status is set to
> - svn_wc_status_unversioned. */
> - if (path_kind != svn_node_none)
> - {
> - if (is_ignored)
> - stat->text_status = svn_wc_status_ignored;
> - else
> - stat->text_status = svn_wc_status_unversioned;
> - }
> -
> - /* If this path has no entry, is NOT present on disk, and IS a
> - tree conflict victim, count it as missing. */
> - if ((path_kind == svn_node_none) && tree_conflict)
> - stat->text_status = svn_wc_status_missing;
> -
> - stat->repos_lock = repos_lock;
> + stat->repos_lock = NULL;
> stat->url = NULL;
> stat->ood_last_cmt_rev = SVN_INVALID_REVNUM;
> stat->ood_last_cmt_date = 0;
>
> --
> Philip