Florian Weimer
2008-07-16 19:40:48 UTC
I tested this again after a couple of years, and the behavior doesn't
seem to have changed: If a Berkeley DB database is written using TDS
with a reasonably sized cache, data is written from the cache to the
file system in what appears to be a random fashion. Apparently, a lot
of holes are created, which are then filled. This degrades file system
performance and makes hot backups somewhat difficult (because the read
performance is a fraction of what can actually be achieved).
Is there still no way to preallocate the contents of B-tree files?
(Without TDS the problem disappears; it seems to be related to TDS or the
cache size.)