I have some questions about how Berkeley DB is designed to perform with very large records and partial record access.
I've been assuming that Berkeley DB manages very large record values
(say, gigabytes long) roughly as efficiently as the Unix filesystem
would, and that manipulating values like that using the partial record
access stuff (http://www.sleepycat.com/docs/ref/am/partial.html) would
be roughly as efficient as using seek, read, and write on a Unix file.
Is this actually the case?
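To make the analogy concrete: this is not Berkeley DB code, just a Python sketch of what a partial read means per that page, where `doff` and `dlen` mirror the offset/length fields of a partial DBT, analogous to `pread` on a Unix file:

```python
def partial_get(record: bytes, doff: int, dlen: int) -> bytes:
    """Sketch of partial-record *get* semantics: return the dlen bytes
    of the record starting at offset doff, like seek+read (pread)."""
    return record[doff:doff + dlen]

# e.g. partial_get(b"abcdefgh", 2, 3) returns b"cde"
```

The question is whether Berkeley DB can serve such a read without touching the rest of a multi-gigabyte record, the way the filesystem serves a `pread` without touching the rest of the file.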
Berkeley DB's partial record functions actually go beyond what seek,
read, and write offer, in that you can replace a section of a record
with text of a different length, thus effectively inserting or
deleting text from a record, as if it were a text editing buffer. How
efficient are those operations? If I just insert some bytes in the
middle of a large record, does it rewrite the entire tail of the
record?
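For clarity, here is a Python sketch (again, just the semantics, not Berkeley DB's implementation) of a partial put that replaces `dlen` bytes at `doff` with data of a different length; with `dlen = 0` it is a pure insertion, and with empty replacement data it is a pure deletion:

```python
def partial_put(record: bytes, doff: int, dlen: int, new: bytes) -> bytes:
    """Sketch of partial-record *put* semantics: replace the dlen bytes
    at offset doff with new (which may be shorter or longer), shifting
    the tail of the record accordingly."""
    return record[:doff] + new + record[doff + dlen:]

# Pure insertion: dlen = 0, so nothing is removed
# partial_put(b"hello world", 5, 0, b", big") returns b"hello, big world"
# Pure deletion: new is empty
# partial_put(b"abcdef", 1, 3, b"") returns b"aef"
```

Naively, the tail shift would mean rewriting everything after the insertion point; my question is whether Berkeley DB's on-disk layout avoids that cost for large records.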
I'm sure you folks would fix any bugs we might find; I'm asking about
the performance you'd expect to see from your data structures,
assuming the implementation is correct.
Received on Sat Oct 21 14:36:28 2006