Early last month (April 1 and 2), I was in San Francisco to attend the 2012 Linux Storage, Filesystem, and Memory Management Summit. It was a great experience for me: I am a longtime fan of many of these kernel maintainers, with their deep understanding of filesystems and storage, and sitting under the same roof with them, discussing cutting-edge technology, is what I had been dreaming of for years.
OK, enough gushing. Here is my summary of the event.
The LSF/MM summit is a small, invitation-only summit that focuses on collaboration and implementation. I was very lucky to be invited because of the work I am doing at EMC, pushing the Lustre client into the mainline kernel. It is also worth mentioning that EMC was a silver sponsor of the event.
The LSF/MM summit is a two-day event consisting of three tracks (IO, FS, and MM). The complete schedule can be found here. I stayed in the filesystem track the whole time, so my summary is mainly about the discussions there. Summaries of the IO and MM tracks can be found here and here.
Runtime filesystem consistency check
This session covered a FAST paper by Ashvin Goel and others from the University of Toronto. The main idea is to record consistency invariants and check them between the filesystem and the block layer, so that errors can be caught early, at transaction commit time. The invariants are predefined; Recon, the proof-of-concept runtime filesystem checker they built, defines 33 of them for ext3. A more detailed introduction to Recon can be found here.
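To make the idea concrete, here is a minimal sketch of what one Recon-style invariant check might look like; all names and the specific invariant are invented for illustration, not taken from the paper. At commit time, the checker compares the metadata updates it inferred from writes seen at the block layer against a rule such as "a block newly marked allocated must gain an inode reference in the same transaction":

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_BLOCKS 1024

/* Hypothetical view of one committing transaction, reconstructed by
 * the checker from writes observed at the block layer. */
struct txn_view {
    bool newly_allocated[MAX_BLOCKS]; /* bits set in the block bitmap */
    bool referenced[MAX_BLOCKS];      /* blocks gaining an inode reference */
};

/* One invented invariant of the general shape Recon's 33 ext3
 * invariants take: every block newly marked allocated in the bitmap
 * must be referenced by some inode within the same transaction.
 * A violation would be flagged before the commit reaches disk. */
static bool check_alloc_invariant(const struct txn_view *txn)
{
    for (uint32_t b = 0; b < MAX_BLOCKS; b++)
        if (txn->newly_allocated[b] && !txn->referenced[b])
            return false;
    return true;
}
```

The point is that the check runs against the in-flight transaction, so an inconsistency is caught at commit time rather than by a later offline fsck.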
Writeback

Wu Fengguang started the writeback discussion with his work on improving the writeback situation. He then summarized his work on IO-less throttling, whose main goals are to minimize seeks, reduce lock contention and cache bouncing, and lower latency; it achieved impressive performance gains, with minor regressions and a lot of algorithmic complexity.
For direct reclaim, pageout work has been moved from direct reclaim to the flusher threads, and dirty-reclaim waits are now focused on the tasks doing the dirtying, to the benefit of interactive performance. Dirty pages at the end of the LRU are still a problem, because scanning for them wastes a lot of CPU. He suggested adding a new LRU list just for tracking dirty pages, though that requires a new page flag.
Memory control groups have problems with the dirty limit, mainly because there is only a global dirty limit and flusher fairness is beyond control. Only coarse options are available, such as limiting the number of operations that can be performed on a per-inode basis, or limiting the amount of IO that can be submitted.
The discussion then moved to blkcg IO control for buffered writes. Current blkcg IO control is useless for buffered writes, because blkcg throttles at submit_bio(), where the context of the original writer is mostly gone. Fengguang suggested, and posted RFC patchsets for, implementing buffered-write IO control in balance_dirty_pages(). However, Tejun Heo argued that blkcg should do its work in the block layer instead of reaching into the mm layer. Others commented that the balance_dirty_pages() algorithm is already very complex, and doing IO control there would make it even harder to understand. The disagreement has existed for a long time and was not going to be resolved soon, so people asked for the discussion to continue in the MM track; it later moved on to the mailing list.
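As a rough illustration of the IO-less idea (not the kernel's actual control algorithm, which is far more elaborate), the dirtying task can simply be paused long enough that its effective dirty rate matches what the flusher can clean; the function name, the linear control law, and the 200 ms cap below are all invented for this sketch:

```c
/* Sketch of IO-less throttling: instead of issuing writeback itself,
 * a task that dirtied `pages_dirtied` pages sleeps in
 * balance_dirty_pages() long enough to match `task_ratelimit_pps`
 * (pages per second), the rate the flusher is believed to sustain.
 * Because the task never submits IO, there are no extra seeks and no
 * lock contention on the IO path from the throttled task. */
static long throttle_pause_ms(unsigned long pages_dirtied,
                              unsigned long task_ratelimit_pps)
{
    if (task_ratelimit_pps == 0)
        return 200;                     /* unknown rate: back off hard */

    long ms = (long)(pages_dirtied * 1000UL / task_ratelimit_pps);
    if (ms > 200)                       /* bound each pause to keep latency low */
        ms = 200;
    return ms;
}
```

Bounding each individual pause is what keeps per-write latency low even when the task is being throttled heavily overall.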
Writeback and Stable Pages
The same topic was discussed at last year's LSF, and the conclusion then was to make a writer wait when it wants to write to a page that is under writeback. However, Ted Ts'o reported long ext4 latencies when Google started to use that code. In brief, waiting for page writeback (to get stable pages) can cause large process latency, and it is not necessary on every system. Stable pages are only required on systems where things like checksums calculated over the page require that the page be unchanged when it actually gets written.
Sage Weil and Boaz Harrosh laid out three options for handling the situation:
1. Re-issue pages that are changed during IO;
2. Wait on writeback for every page (the current implementation);
3. Do a COW on the page under writeback when it is written to.
The first option was dropped instantly because it confuses storage that needs stable pages and is pure overhead for storage that doesn't.
The third option was discussed, but the overhead of COW is unknown, and there are corner cases to address, such as what to do if the file is truncated after the COW page is created. So the suggested first step is to introduce APIs that let storage tell the filesystem whether it needs stable pages, and let the filesystem tell storage whether stable pages are supported. Then, in cases like Google's where stable pages are unnecessary, the filesystem doesn't have to do anything to provide them. As for stable-page support, some reporting should be added to the writeback code path to find out which workloads are affected and how. Then someone can propose how to implement the COW solution and address all the corner cases.
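A sketch of what the suggested negotiation might look like; the capability bit and helper names are hypothetical, loosely modeled on backing_dev_info capability flags. Storage advertises whether it needs stable pages, and the filesystem waits on writeback only when it does:

```c
#include <stdbool.h>

#define BDI_CAP_STABLE_WRITES  (1u << 0)  /* hypothetical capability bit */

/* Simplified stand-in for the device's backing_dev_info. */
struct backing_dev {
    unsigned int capabilities;
};

/* Storage tells the filesystem whether it needs stable pages, e.g.
 * because it checksums pages while they are in flight. */
static bool storage_needs_stable_pages(const struct backing_dev *bdi)
{
    return bdi->capabilities & BDI_CAP_STABLE_WRITES;
}

/* On the write path, wait for writeback only when the device really
 * needs the page to stay unchanged; systems that do not need stable
 * pages skip the wait and avoid the latency hit Google reported. */
static bool must_wait_for_writeback(const struct backing_dev *bdi,
                                    bool page_under_writeback)
{
    return page_under_writeback && storage_needs_stable_pages(bdi);
}
```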
Copy Offload

NetApp's Frederick Knight led the copy offload session. The idea of copy offload is to allow SCSI devices to copy ranges of blocks without involving the host operating system. Copy offload has been in the SCSI standard since SCSI-1 days. EXTENDED COPY (XCOPY) uses two descriptors for the source and destination, plus a range of blocks. It can be implemented in either a push model (the source sends the blocks to the target) or a pull model (the target pulls them from the source).
Token-based copy is far more complex. Each token holds a ROD (Representation of Data) that lets arrays hand out an identifier for what may be a snapshot. A token represents a device and a range of sectors that the device guarantees to be stable. However, if the device doesn't support snapshots and the region gets overwritten for any reason, the token can be declined by the storage. This means storage users have no idea of a token's lifetime; every time a token goes invalid, they must either renew it or fall back to a real data transfer.
Token-based copy is somewhat similar to NFS's server-side copy (in the NFSv4.2 draft), and it was suggested that the token format be standardized to possibly allow copy offload from a SCSI disk to a CIFS/NFS volume.
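To illustrate the token-lifetime problem, here is a toy driver loop. All functions are invented stand-ins for the commands involved (conceptually, creating a token for the source range and writing from it at the destination), simulated in plain C: the client renews a declined token a bounded number of times, then gives up on offload and copies the data through the host.

```c
enum copy_status { COPY_OK, COPY_TOKEN_INVALID };

/* Simulated device: the token gets declined a fixed number of times,
 * as if the source region kept being overwritten with no snapshot
 * backing it. */
struct device_sim {
    int invalidations_left;  /* times the token will still be declined */
    int real_copies;         /* fallback host-driven copies performed */
};

/* Stand-in for creating a token over the source range. */
static int populate_token(struct device_sim *dev) { (void)dev; return 42; }

/* Stand-in for the destination writing from the token; the storage
 * may decline if the token's data is no longer stable. */
static enum copy_status write_using_token(struct device_sim *dev, int token)
{
    (void)token;
    if (dev->invalidations_left > 0) {
        dev->invalidations_left--;
        return COPY_TOKEN_INVALID;
    }
    return COPY_OK;
}

/* Fallback: an ordinary read/write copy through the host. */
static void plain_copy(struct device_sim *dev) { dev->real_copies++; }

/* Driver loop: renew the token a bounded number of times, then stop
 * trying to offload and move the data the slow way. */
static void offload_copy(struct device_sim *dev, int max_retries)
{
    for (int attempt = 0; attempt <= max_retries; attempt++) {
        int token = populate_token(dev);
        if (write_using_token(dev, token) == COPY_OK)
            return;
    }
    plain_copy(dev);
}
```

The retry bound matters precisely because the client cannot know a token's lifetime in advance; without it, a busy region could make the copy livelock on renewals.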
Kernel AIO/DIO Interfaces
The first session of the afternoon was led by Dave Kleikamp, who is trying to modify the kernel AIO/DIO APIs to provide in-kernel interfaces. He changed iov_iter to handle both an iovec (from user space) and a bio_vec (from kernel space), and he modified the loop device to submit AIO, thereby avoiding double caching in the underlying filesystem. People suggested that swap-over-NFS could be adapted to use the same API.
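The shape of the change can be sketched as a tagged iterator that walks either segment type; this is a simplified userspace model of the idea, with invented names, not the kernel's actual struct iov_iter:

```c
#include <stddef.h>

/* Simplified stand-ins for the two segment types. */
struct iovec_s   { void *base; size_t len; };            /* user space   */
struct bio_vec_s { void *page; size_t offset, len; };    /* kernel space */

enum iter_type { ITER_IOVEC, ITER_BVEC };

/* One iterator serving both callers, so a single AIO/DIO path can be
 * fed from user-space iovecs or from in-kernel bio_vecs (as in the
 * loop device conversion). */
struct iov_iter_s {
    enum iter_type type;
    size_t nr_segs;
    union {
        const struct iovec_s   *iov;
        const struct bio_vec_s *bvec;
    } u;
};

/* Total bytes described by the iterator, dispatching on the tag;
 * real read/write paths would dispatch the same way per segment. */
static size_t iter_total(const struct iov_iter_s *it)
{
    size_t sum = 0;
    for (size_t i = 0; i < it->nr_segs; i++)
        sum += (it->type == ITER_IOVEC) ? it->u.iov[i].len
                                        : it->u.bvec[i].len;
    return sum;
}
```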
RAID Engine Unification
Boaz Harrosh implemented a generic RAID engine for the pNFS object layout driver. Since the code is simple and efficient, he wants to push its adoption and unify the kernel's RAID implementations. Besides Boaz's, there are currently two other RAID implementations: MD and btrfs. Boaz said his implementation is more complete and supports RAID stacking without extra data copies.
However, the benefit does not seem obvious, and people hesitate to adopt it because the current code works just fine. Chris Mason suggested that Boaz start with MD, because MD is much simpler than btrfs; if that works out well, he can then convert btrfs to the new RAID engine.
xfstests

After many years of advocacy, xfstests has somehow become the most widely used regression test suite not just for xfs, but also for ext4 and btrfs. There are ~280 test cases, and around 100 of them are filesystem independent. One nightmare, however, is that the test files are numbered instead of properly named, so anyone who wants to use a test has to read it to find out what it actually does. The suite also needs to be reorganized so that similar tests are grouped into directories instead of lying flat as they do now.
So much for the first day. Here are some pictures of attendees taken by Chuck Lever: