The recent Linux File System FSCK Test Results article stirred up some questions and a few comments. I wanted to discuss the results a little in the hope of answering these questions and addressing some of the comments. A full-length follow-on article is planned that will analyze the results in more depth.
Once we collected the test results, we compared them against other published file system check numbers, primarily those presented by Ric Wheeler, the file system lead at Red Hat. Our files-checked-per-second rate was in line with what Ric has posted.
A comment several people raised was that the test was unrealistic: it didn't fragment the file system, didn't "damage" it in any way, and therefore didn't represent real file systems. Recall that the purpose of the testing was to establish how long a file system check would take on a "clean" file system. In some ways this gives you an ideal (best-case) result.
I have also noted that it is very difficult to fragment a file system, damage it, or lay out files in a repeatable manner. Had I chosen to fill the file system in some arbitrary way, there would still be complaints that the test didn't represent a "real" situation (which typically means it didn't represent the complainer's data layout). I understand this complaint, but there are no tools, established guidelines, or techniques for doing this type of testing. Any testing along these lines would therefore have been arbitrary, and the results would not have illustrated much. Testing what you might call an "ideal" file system check, however, can still provide useful information.
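To make the repeatability problem concrete, here is a minimal sketch of the kind of arbitrary "aging" script one might write: it churns a directory by creating files of random sizes and deleting a random subset each round, leaving free-space holes behind. The function name, parameters, and workload are all illustrative assumptions on my part, not a tool used in the tests; real aging would need to replay a realistic workload, which is exactly the part no established tool provides.

```python
import os
import random
import tempfile

def age_filesystem(root, rounds=5, files_per_round=20, seed=42):
    """Naively "age" a directory tree by repeatedly creating files of
    random sizes and then deleting a random half of the live files,
    leaving free-space holes. Illustrative only: the churn pattern is
    arbitrary, which is precisely the problem with this kind of test."""
    rng = random.Random(seed)  # fixed seed for repeatable layout
    live = []
    for r in range(rounds):
        # create a batch of files with random sizes (1 KiB - 64 KiB)
        for i in range(files_per_round):
            path = os.path.join(root, f"f{r}_{i}.dat")
            with open(path, "wb") as fh:
                fh.write(os.urandom(rng.randint(1024, 64 * 1024)))
            live.append(path)
        # delete roughly half of the currently live files at random
        rng.shuffle(live)
        for path in live[: len(live) // 2]:
            os.remove(path)
        live = live[len(live) // 2:]
    return live  # paths that survived the churn

if __name__ == "__main__":
    # run the churn in a throwaway directory so nothing real is touched
    with tempfile.TemporaryDirectory() as root:
        survivors = age_filesystem(root)
        print(f"{len(survivors)} files survived the churn")
```

Note that even with a fixed seed this only fixes one arbitrary layout among many; a different seed, file-size range, or deletion ratio yields a different "aged" state, which is why such results are hard to defend as representative.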
Look for an upcoming article that analyzes these results a bit more closely.