From: Performance and workload studies
The last block of a file is normally only partially occupied, so as
block sizes are increased, the amount of wasted disk space increases
too.
The following historical values for the design of the BSD FFS are
given in `The Design and Implementation of the 4.3BSD UNIX Operating
System':
    fragment size    overhead
       (bytes)          (%)
          512           4.2
         1024           9.1
         2048          19.7
         4096          42.9
Files have clearly gotten larger since then; I obtained the following
results:
    fragment size    overhead
       (bytes)          (%)
          128           0.3
          256           0.6
          512           1.1
         1024           2.5
         2048           5.4
         4096          12.3
         8192          27.8
        16384          61.2
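For those who want to repeat the measurement, the following rough
sketch (not taken from the survey above) shows one way such a table
can be computed in C.  It reads one file size per line from standard
input and, for each candidate fragment size, accumulates the space
wasted in the final, partially filled fragment of each file, reporting
it as a percentage of the total file data.  (It assumes overhead is
measured against the bytes of data stored; measuring against allocated
disk space instead would give somewhat different percentages.)

    /*
     * overhead.c -- rough sketch: estimate internal fragmentation for a
     * set of hypothetical fragment sizes.  Input is one file size (in
     * bytes) per line on stdin.
     */
    #include <stdio.h>

    int main(void)
    {
        static const unsigned long frag[] =
            { 128, 256, 512, 1024, 2048, 4096, 8192, 16384 };
        enum { NFRAG = sizeof frag / sizeof frag[0] };
        unsigned long long wasted[NFRAG] = { 0 };
        unsigned long long total = 0, size;
        int i;

        while (scanf("%llu", &size) == 1) {
            total += size;
            for (i = 0; i < NFRAG; i++) {
                unsigned long long rem = size % frag[i];
                if (rem != 0)               /* last fragment partly used */
                    wasted[i] += frag[i] - rem;
            }
        }

        printf("fragment size    overhead\n");
        printf("   (bytes)          (%%)\n");
        for (i = 0; i < NFRAG; i++)
            printf("%10lu    %9.1f\n", frag[i],
                   total ? 100.0 * wasted[i] / total : 0.0);

        return 0;
    }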
By default the BSD FFS uses a 1K fragment size. Perhaps this size is
no longer optimal and should be increased.
(The FFS block size is constrained to be no more than 8 times the
fragment size. Clustering is a good way to improve throughput for
FFS-based file systems, but it does little to reduce the not
insignificant computational overhead of the FFS.)
It is interesting to note that even though most files are less than 2K
in size, having a 2K block size wastes very little space, because disk
space consumption is so totally dominated by large files.
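As a rough illustration (the numbers are invented for the sake of the
example): with a 2K fragment size, a thousand 1K files waste 1K each,
about a megabyte in total; but a single 100MB file, which itself
wastes at most 2K, already makes that megabyte less than 1% of the
data stored. Small files dominate the file count, while large files
dominate the bytes.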
