The more I think about it, the more I believe the POSIX I/O consistency model will become part of our past. People programming in C, C++ and other languages besides Java (Java is the language of choice for Hadoop) all have to deal with this problem when multiple threads or processes open the same file and write, or write and read, at the same time.
The thing is, databases solved this problem years ago. Early on, most databases ran on the raw device because running on a file system was too slow, given POSIX locking. Then came the Veritas Database Edition, which removed the POSIX locking and allowed the application to control the overlap. Databases were designed to manage their own consistency, so they did not need the POSIX consistency model.
My question is: is it time to move the consistency model out of the file system, given that those who control the POSIX I/O standard have no intention of changing it? I believe the time has come to move the consistency model into the application and let the application control the I/O overlap.
To me, keeping it in the file system is a non-starter as we move forward. No changes to the standard are planned, and the current method limits performance and scaling. All we need is the ability for a file system to ignore the POSIX consistency model, an ability that has been available for, I think, almost 15 years.
This is not new technology, and it has been done in other file systems. We need user-level tools to manage the overlap. We can then all ignore what The Open Group gives us.