On Jun 13, 2019, at 12:36 PM, Kent Overstreet wrote:
>
> On Thu, Jun 13, 2019 at 09:02:24AM +1000, Dave Chinner wrote:
>> On Wed, Jun 12, 2019 at 12:21:44PM -0400, Kent Overstreet wrote:
>>> Ok, I'm totally on board with returning EDEADLOCK.
>>>
>>> Question: Would we be ok with returning EDEADLOCK for any IO where the buffer is
>>> in the same address space as the file being read/written to, even if the buffer
>>> and the IO don't technically overlap?
>>
>> I'd say that depends on the lock granularity. For a range lock,
>> we'd be able to do the IO for non-overlapping ranges. For a normal
>> mutex or rwsem, then we risk deadlock if the page fault triggers on
>> the same address space host as we already have locked for IO. That's
>> the case we currently handle with the second IO lock in XFS, ext4,
>> btrfs, etc (XFS_MMAPLOCK_* in XFS).
>>
>> One of the reasons I'm looking at range locks for XFS is to get rid
>> of the need for this second mmap lock, as there is no reason for it
>> existing if we can lock ranges and EDEADLOCK inside page faults and
>> return errors.
>
> My concern is that range locks are going to turn out to be both more complicated
> and heavier weight, performance wise, than the approach I've taken of just a
> single lock per address space.
>
> Reason being range locks only help when you've got multiple operations going on
> simultaneously that don't conflict - i.e. it's really only going to be useful
> for applications that are doing buffered IO and direct IO simultaneously to the
> same file. Personally, I think that would be a pretty gross thing to do and I'm
> not particularly interested in optimizing for that case myself... but, if you
> know of applications that do depend on that I might change my opinion. If not, I
> want to try and get the simpler, one-lock-per-address space approach to work.

There are definitely workloads that require multiple threads doing non-overlapping writes to a single file in HPC.
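The range-lock behavior Dave describes, granting non-overlapping ranges while failing an overlapping request with EDEADLOCK instead of blocking inside a page fault, comes down to an interval-overlap check. A minimal user-space sketch of just that conflict rule; the names (struct range_lock, range_trylock) are hypothetical and don't correspond to any actual XFS or bcachefs code:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* One held lock over the byte range [start, end) of a file. */
struct range_lock {
	long long start, end;
	struct range_lock *next;
};

/* Two half-open byte ranges conflict only if they overlap. */
static bool ranges_overlap(long long a_start, long long a_end,
			   long long b_start, long long b_end)
{
	return a_start < b_end && b_start < a_end;
}

/*
 * Try to take [start, end).  Instead of blocking on a conflict (which
 * is where a page fault recursing into the same address space mid-IO
 * would deadlock), return -EDEADLK and let the caller unwind.
 */
static int range_trylock(struct range_lock **held, struct range_lock *new,
			 long long start, long long end)
{
	for (struct range_lock *r = *held; r; r = r->next)
		if (ranges_overlap(r->start, r->end, start, end))
			return -EDEADLK;
	new->start = start;
	new->end = end;
	new->next = *held;
	*held = new;
	return 0;
}
```

A real implementation would use an interval tree and wait queues rather than a linear list; the sketch only shows the conflict rule that lets disjoint IOs proceed in parallel.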
This is becoming an increasingly common problem as the number of cores on a single client increases, since there is typically one thread per core trying to write to a shared file. Using multiple files (one per core) is possible, but that has file management issues for users when there are a million cores running on the same job/file (obviously not all on the same client node) dumping data every hour.

We were just looking at this exact problem last week. Most of the threads are spinning in grab_cache_page_nowait->add_to_page_cache_lru() and set_page_dirty(), writing at 1.9GB/s when they could be writing at 5.8GB/s (as they do when writing O_DIRECT instead of buffered). A flame graph is attached for the 16-thread case, but high-end systems today easily have 2-4x that many cores. Any range lock approach can hardly be worse than spending 80% of the time spinning.

Cheers, Andreas
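To make the workload above concrete: one writer thread per core, each dumping to its own disjoint offset within a single shared file. A minimal sketch with hypothetical path, thread count, and chunk size, not actual application code; the point is that no two pwrite() ranges ever overlap, so a range lock could admit all writers in parallel where a single per-address-space lock serializes them:

```c
#include <assert.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS 16		/* one writer per core, as in the 16-thread case */
#define CHUNK	 (1 << 20)	/* 1 MiB per thread per dump */

struct writer { int fd; int idx; };

/*
 * Each thread writes only its own [idx*CHUNK, (idx+1)*CHUNK) region of
 * the shared file, so the writes are strictly non-overlapping.
 */
static void *write_chunk(void *arg)
{
	struct writer *w = arg;
	char *buf = malloc(CHUNK);

	memset(buf, 'a' + w->idx, CHUNK);
	pwrite(w->fd, buf, CHUNK, (off_t)w->idx * CHUNK);
	free(buf);
	return NULL;
}

/* Spawn one writer per "core", wait for them, return the file size. */
static long long run_writers(const char *path)
{
	int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0644);
	pthread_t tids[NTHREADS];
	struct writer ws[NTHREADS];
	long long size;

	for (int i = 0; i < NTHREADS; i++) {
		ws[i] = (struct writer){ .fd = fd, .idx = i };
		pthread_create(&tids[i], NULL, write_chunk, &ws[i]);
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);

	size = lseek(fd, 0, SEEK_END);
	close(fd);
	return size;
}
```

With buffered IO every one of these threads contends on the same page-cache locks; with O_DIRECT (or a range lock) they need not.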