On Sat, Feb 5, 2011 at 1:05 PM, Erik Huelsmann <ehuels_at_gmail.com> wrote:
> Scenario (2) takes ~0.27 seconds to evaluate in the unmodified
> database. Adding an index on (wc_id, local_relpath) makes the
> execution time drop to ~0.000156 seconds!
>
>
> Seems Philip was right :-) We need to carefully review the indices we
> have in our database to support good performance.
I wish this document were fully fleshed out; it seems to have some
good info in it:
http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html
Getting indexes in place for the bulk of our reads is essential, and
now seems like a good time to make that a priority. Of course, adding
more indexes will further slow down write speed (which already seems
bad), so maybe the above document will give ideas for other
optimizations.
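To make the index review concrete, here is a small sketch of the effect Erik measured. The table and column names below (`nodes`, `wc_id`, `local_relpath`) are assumptions modeled on his mail, not the actual wc.db schema; the point is that EXPLAIN QUERY PLAN shows directly whether a query hits an index:

```python
import sqlite3

# Toy stand-in table; names are assumptions, not the real wc.db schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (wc_id INTEGER, local_relpath TEXT, kind TEXT)")

query = "SELECT * FROM nodes WHERE wc_id = 1 AND local_relpath = 'trunk/foo.c'"

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan(query)  # full table scan, e.g. "SCAN nodes"

conn.execute("CREATE INDEX i_wc_relpath ON nodes (wc_id, local_relpath)")
after = plan(query)   # e.g. "SEARCH nodes USING INDEX i_wc_relpath ..."

print(before)
print(after)
```

Running each candidate query through a helper like `plan()` before and after adding an index is a quick way to verify the index is actually used.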
Did anyone see the tests I posted on users@ of a checkout with 5000
files in a single folder? I really thought we would be faster than 1.6
already, but we are actually several times slower.
My background is all with DB2 on OS/400. Something I was looking for
in the SQLite docs is whether it uses hints for the number of rows in
a table. For example, DB2 optimizes a new table for 10,000 rows,
growing in increments of 1,000 when you reach the limit. If you know
you are inserting 100,000 rows, you can get a massive performance
improvement by telling DB2 to optimize for a larger size. I was
wondering if SQLite does something similar, perhaps optimizing for 100
rows or some other small number. I noticed the end of the checkout is
really slow, which suggests it does not insert the rows quickly. Maybe
this is just an area where we need to use transactions better?
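On the transaction point: with SQLite, wrapping a whole batch of inserts in one transaction is usually the big win, since every separate commit forces its own journal sync. A rough sketch of the two patterns (toy table and in-memory db, so the on-disk fsync cost that dominates in practice is not visible here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (wc_id INTEGER, local_relpath TEXT)")
rows = [(1, "dir/file%d" % i) for i in range(5000)]

# Pattern 1: one commit per row (roughly what naive insert loops do).
# On a real on-disk db, each commit pays a sync, which is what makes
# bulk inserts crawl.
for r in rows:
    conn.execute("INSERT INTO nodes VALUES (?, ?)", r)
    conn.commit()

# Pattern 2: the whole batch inside a single transaction.
with conn:  # BEGIN ... COMMIT around everything
    conn.executemany("INSERT INTO nodes VALUES (?, ?)", rows)

total = conn.execute("SELECT COUNT(*) FROM nodes").fetchone()[0]
print(total)
```

If the checkout code is committing per node rather than per batch, that alone could explain the slow tail end of the checkout.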
Anyway, a big +1 on getting the right indexes in place. I know SQLite
has an EXPLAIN statement; I'm not sure if there are tools that can
just capture information and tell you which indexes you need. On
databases like DB2 such tools are available, and they can save time.
In fact, you could almost remove all the indexes and run some tests to
let the db tell you what indexes you need. Of course, our test suite
probably does not have enough data in the db to make indexes any
faster than a table scan, so you would probably have to do manual
testing with a large working copy to see what you need.
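As far as I know SQLite does not ship an index advisor like DB2's, but you can approximate one: feed the statements a test run would issue through EXPLAIN QUERY PLAN and flag anything that falls back to a full table scan. A rough sketch, with made-up table and queries:

```python
import sqlite3

# Poor man's index advisor: flag queries whose plan is a full scan.
# Table, index, and queries below are illustrative stand-ins only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE nodes (wc_id INTEGER, local_relpath TEXT, parent_relpath TEXT)"
)
conn.execute("CREATE INDEX i_parent ON nodes (parent_relpath)")

candidate_queries = [
    "SELECT * FROM nodes WHERE parent_relpath = 'trunk'",           # indexed
    "SELECT * FROM nodes WHERE wc_id = 1 AND local_relpath = 'x'",  # not indexed
]

def full_scans(queries):
    flagged = []
    for q in queries:
        plan = " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + q))
        # A plan mentioning SCAN with no index at all is a full table scan.
        if "SCAN" in plan and "INDEX" not in plan:
            flagged.append(q)
    return flagged

print(full_scans(candidate_queries))
```

Run against a captured log of real statements from a large working copy, something like this would at least point at the hot spots worth indexing.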
--
Thanks
Mark Phippard
http://markphip.blogspot.com/
Received on 2011-02-05 20:25:54 CET