Stefan Fuhrmann <stefanfuhrmann_at_alice-dsl.de> writes:
> further reducing my backlog of patches sitting in my
> working copy, this and the next patch optimize code
> locally - shaving off cycles here and there. The net
> effect is somewhere between 3 and 10 percent
> for repository access (ls, export, etc.).
>
> In this patch, I eliminated calls to memcpy for small
> copies as they are particularly expensive in the MS CRT.
For gcc (on Linux at least), memcpy is automatically inlined for small
copies. Obscuring the memcpy could well result in worse code.
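For instance (a tiny example of my own, not from the patch), with a
small constant size gcc at -O2 expands the call inline:

    #include <string.h>

    void
    copy4(char *dst, const char *src)
    {
      /* gcc expands this into a single 32-bit load/store, so there is
         no library-call overhead to eliminate in the first place. */
      memcpy(dst, src, 4);
    }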
> @@ -594,31 +636,46 @@
> semantics aren't guaranteed for overlapping memory areas,
> and target copies are allowed to overlap to generate
> repeated data. */
> - assert(op->offset < tpos);
> - for (i = op->offset, j = tpos; i < op->offset + buf_len; i++)
> - tbuf[j++] = tbuf[i];
Why are we not using memmove?
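Something along these lines (an untested sketch, reusing the names
from the hunk above) is what I have in mind:

    memmove(tbuf + tpos, tbuf + op->offset, buf_len);

Though if the overlap really is intentional, i.e. the source range can
extend past tpos so that the byte-by-byte loop replicates data as it
goes, then memmove would not reproduce that, since it behaves as if it
copied through a temporary buffer.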
> --- subversion/libsvn_subr/svn_string.c (revision 937673)
> +++ subversion/libsvn_subr/svn_string.c (working copy)
> @@ -391,20 +391,34 @@
> apr_size_t total_len;
> void *start_address;
>
> - total_len = str->len + count; /* total size needed */
> + /* This function is frequently called by svn_stream_readline
> + adding one char at a time. Eliminate the 'evil' memcpy in
> + that case unless the buffer must be resized. */
>
If we use it a lot, then optimising for a single byte might be
worthwhile. Perhaps we should write svn_stringbuf_appendbyte?
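Something like this rough, untested sketch is what I mean; the fast
path touches only the existing svn_stringbuf_t fields, and the slow
path just falls back to svn_stringbuf_appendbytes():

    #include "svn_string.h"

    void
    svn_stringbuf_appendbyte(svn_stringbuf_t *str, char byte)
    {
      /* Common case: there is room for one more byte plus the
         terminating NUL, so no memcpy and no reallocation at all. */
      if (str->blocksize > str->len + 1)
        {
          str->data[str->len] = byte;
          str->data[++str->len] = '\0';
        }
      else
        {
          /* Rare case: let the generic append grow the buffer. */
          svn_stringbuf_appendbytes(str, &byte, 1);
        }
    }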
--
Philip