On Mon, May 21, 2018 at 4:05 PM Davor Josipovic <davorj_at_live.com>
wrote:
>>I have finally found the culprit. By coincidence, I had the same
>>configuration running on two servers, one backed by an SSD, the
>>other by an HDD.
>>
>>The same commit worked on the SSD-backed server, and always failed
>>on the HDD one with the described error.
>>
>>Monitoring the process revealed that after the file transfer of the
>>commit completes, the server starts finalizing. I'm not sure what
>>exactly happens server-side during that period (maybe someone can
>>explain?), but for a commit of 40k files, the SSD-backed server
>>completed in about 30 seconds, while the HDD-backed one struggled
>>for more than 30 minutes. During those 30 minutes the client got no
>>response, so it timed out, which resulted in the described error on
>>the server side.
>(Snip)
>
>
>Now that is interesting. 40k doesn't seem to be such a large amount
>of data for modern computers. Very slow and fragmented hard drive? Or
>perhaps there's something else going on that is manifesting this way?
>
I think he meant that he committed 40,000 individual (small) files
in one go. That is a *lot* of files, and on a large-capacity HDD it
also wastes a lot of disk space, since every small file uses up a
full allocation cluster. I believe this can be improved (if the
repository uses the FSFS backend) by packing the repository.
Please see:
http://svnbook.red-bean.com/nightly/en/svn.reposadmin.maint.html#svn.reposadmin.maint.diskspace.fsfspacking
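
In case it helps, packing boils down to a single administrative
command run on the server host against the repository's on-disk path
(the path below is just a placeholder):

  $ svnadmin pack /path/to/repos

It consolidates the completed revision shards into single pack
files, which greatly reduces the number of small files on disk and
the per-file cluster overhead.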
--
Bo Berglund
Developer in Sweden