linux-fsdevel.vger.kernel.org archive mirror
* Writing to FUSE via mmap extremely slow (sometimes) on some machines?
@ 2020-02-24 13:29 Michael Stapelberg
       [not found] ` <CACQJH27s4HKzPgUkVT+FXWLGqJAAMYEkeKe7cidcesaYdE2Vog@mail.gmail.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Stapelberg @ 2020-02-24 13:29 UTC (permalink / raw)
  To: fuse-devel, Miklos Szeredi, linux-fsdevel, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1898 bytes --]

Hey,

I’m running into an issue where writes via mmap are extremely slow.
The mmap program I’m using to test is attached.

The symptom is that the program usually completes in 0.x seconds, but
then sometimes takes minutes to complete! E.g.:

% dd if=/dev/urandom of=/tmp/was bs=1M count=99

% ./fusexmp_fh /tmp/mnt

% time ~/mmap /tmp/was /tmp/mnt/tmp/stapelberg.1
Mapped src: 0x10000  and dst: 0x21b8b000
memcpy done
~/mmap /tmp/was /tmp/mnt/tmp/stapelberg.1  0.06s user 0.20s system 48%
cpu 0.544 total

% time   ~/mmap /tmp/was /tmp/mnt/tmp/stapelberg.1
Mapped src: 0x10000  and dst: 0x471fb000
memcpy done
~/mmap /tmp/was /tmp/mnt/tmp/stapelberg.1  0.05s user 0.22s system 0%
cpu 2:03.39 total

This affects both an in-house FUSE file system and FUSE’s fusexmp_fh
example from 2.9.7 (matching what our in-house FS uses).

While this is happening, the machine is otherwise idle. E.g. dstat shows:

--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  1   0  98   1   0|   0     0 |  19k   23k|   0     0 |  14k   27k
  1   0  98   1   0|   0     0 |  33k   53k|   0     0 |  14k   29k
  0   0  98   1   0|   0   176k|  27k   26k|   0     0 |  13k   25k
[…]

While this is happening, using cp(1) to copy the same file is fast (1
second). It’s only mmap-based writing that’s slow.

This is with Linux 5.2.17, but the problem has apparently been going on for years.

I haven’t quite figured out the pattern in which machines are
affected. One wild guess is that it might be related to RAM: the
machine on which I can most frequently reproduce the issue has 192GB
of RAM, whereas I haven’t been able to reproduce it on my workstation
with 64GB.
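
In case it helps, here is a sketch for dumping the writeback knobs on
an affected machine; the absolute dirty limits are derived from these
ratios and the amount of RAM, so big-memory machines end up with very
different thresholds (the /proc/vmstat counter names are my assumption
and may vary by kernel):

```shell
# Dump the writeback tuning knobs; the absolute dirty limits are derived
# from these ratios and the amount of reclaimable RAM, so a 192GB and a
# 64GB machine end up with very different thresholds.
for f in dirty_ratio dirty_background_ratio dirty_bytes \
         dirty_background_bytes dirty_expire_centisecs; do
  printf '%s = %s\n' "$f" "$(cat /proc/sys/vm/$f 2>/dev/null || echo '?')"
done
# The thresholds the kernel actually computed, in pages (assumed names):
grep -E '^nr_dirty(_background)?_threshold' /proc/vmstat 2>/dev/null || true
```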

Any ideas what I could check to further narrow down this issue?

Thanks,

[-- Attachment #2: mmap.c --]
[-- Type: text/x-csrc, Size: 2132 bytes --]

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h> 
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * An implementation of copy ("cp") that uses memory maps, with just
 * enough error checking to fail loudly instead of silently.
 */

// Where we want the source file's memory map to live in virtual memory.
// The destination file's map is requested immediately after the source's.
#define MAP_LOCATION 0x6100

int main (int argc, char *argv[]) {
  int fdin, fdout;
  char *src, *dst;
  struct stat statbuf;
  off_t fileSize = 0;

  if (argc != 3) {
    fprintf (stderr, "usage: %s <fromfile> <tofile>\n", argv[0]);
    exit(1);
  }

  /* open the input file */
  if ((fdin = open (argv[1], O_RDONLY)) < 0) {
    fprintf (stderr, "can't open %s for reading\n", argv[1]);
    exit(1);
  }

  /* open/create the output file */
  if ((fdout = open (argv[2], O_RDWR | O_CREAT | O_TRUNC, 0600)) < 0) {
    fprintf (stderr, "can't create %s for writing\n", argv[2]);
    exit(1);
  }

  /* find size of input file */
  if (fstat (fdin, &statbuf) < 0) {
    perror ("fstat");
    exit(1);
  }
  fileSize = statbuf.st_size;

  /* extend the output file: seek to the location of the last byte... */
  if (lseek (fdout, fileSize - 1, SEEK_SET) == -1) {
    perror ("lseek");
    exit(1);
  }

  /* ...and write a dummy byte there */
  if (write (fdout, "", 1) != 1) {
    perror ("write");
    exit(1);
  }

  /*
   * memory map the input file.  Only the first two arguments are
   * interesting: 1) the location and 2) the size of the memory map
   * in virtual memory space. Note that the location is only a "hint";
   * the OS can choose to return a different virtual memory address.
   * This is illustrated by the printf below.
   */
  src = mmap ((void *) MAP_LOCATION, fileSize,
	      PROT_READ, MAP_SHARED | MAP_POPULATE, fdin, 0);
  if (src == MAP_FAILED) {
    perror ("mmap src");
    exit(1);
  }

  /* memory map the output file after the input file */
  dst = mmap ((void *) (MAP_LOCATION + fileSize), fileSize,
	      PROT_READ | PROT_WRITE, MAP_SHARED, fdout, 0);
  if (dst == MAP_FAILED) {
    perror ("mmap dst");
    exit(1);
  }

  printf("Mapped src: %p  and dst: %p\n", (void *) src, (void *) dst);

  /* Copy the input file to the output file */
  memcpy (dst, src, fileSize);

  printf("memcpy done\n");

  munmap (src, fileSize);
  munmap (dst, fileSize);
  close (fdin);
  close (fdout);
  return 0;
} /* main */

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
       [not found]         ` <CAJfpegtkEU9=3cvy8VNr4SnojErYFOTaCzUZLYvMuQMi050bPQ@mail.gmail.com>
@ 2020-03-03 10:34           ` Michael Stapelberg
  2020-03-03 13:04           ` Tejun Heo
  1 sibling, 0 replies; 11+ messages in thread
From: Michael Stapelberg @ 2020-03-03 10:34 UTC (permalink / raw)
  To: Miklos Szeredi; +Cc: Tejun Heo, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

Tejun, friendly ping? Any thoughts on this? Thanks!

On Wed, Feb 26, 2020 at 9:00 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> Adding more CC and re-attaching the reproducer and the 25s log.
>
> On Wed, Feb 26, 2020 at 11:03 AM Michael Stapelberg
> <michael+lkml@stapelberg.ch> wrote:
> >
> > Find attached two logs:
> >
> > fuse-1s.log shows the expected case
> >
> > fuse-25s.log shows the issue. Note that the log spans 2 days. I
> > started the cp at 10:54:53.251395. Note how the first WRITE opcode is
> > only received at 10:55:18.094578!
>
> Observations:
>
> - apparently memcpy is copying downwards (from largest address to
> smallest address).  Not sure why, when I run the reproducer, it copies
> upwards.
> - there's a slow batch of reads of the first ~4MB of data, then a
> quick writeback
> - there's a quick read of the rest (~95MB) of data, then a quick
> writeback of the same
>
> Plots of the whole and closeups of slow and quick segments attached.
> X axis is time, Y axis is offset.
>
> Tejun, could this behavior be attributed to dirty throttling?  What
> would be the best way to trace this?
>
> Thanks,
> Miklos
>
>
> >
> > Is there some sort of readahead going on that’s then being throttled somewhere?
> >
> > Thanks,
> >
> > On Mon, Feb 24, 2020 at 3:23 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> > >
> > > On Mon, Feb 24, 2020 at 3:18 PM Michael Stapelberg
> > > <michael+lkml@stapelberg.ch> wrote:
> > > >
> > > > Sorry, to clarify: the hang is always in the memcpy call. I.e., the
> > > > “Mapped” message is always printed, and it takes a long time until
> > > > “memcpy done” is printed.
> > >
> > > Have you tried running the fuse daemon with debugging enabled?  Is
> > > there any observable difference between the fast and the slow runs?
> > >
> > > Thanks,
> > > Miklos

* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
       [not found]         ` <CAJfpegtkEU9=3cvy8VNr4SnojErYFOTaCzUZLYvMuQMi050bPQ@mail.gmail.com>
  2020-03-03 10:34           ` [fuse-devel] " Michael Stapelberg
@ 2020-03-03 13:04           ` Tejun Heo
  2020-03-03 14:03             ` Michael Stapelberg
  1 sibling, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2020-03-03 13:04 UTC (permalink / raw)
  To: Miklos Szeredi
  Cc: Michael Stapelberg, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

Hello,

Sorry about the delay.

On Wed, Feb 26, 2020 at 08:59:55PM +0100, Miklos Szeredi wrote:
> - apparently memcpy is copying downwards (from largest address to
> smallest address).  Not sure why, when I run the reproducer, it copies
> upwards.
> - there's a slow batch of reads of the first ~4MB of data, then a
> quick writeback
> - there's a quick read of the rest (~95MB) of data, then a quick
> writeback of the same
> 
> Plots of the whole and closeups of slow and quick segments attached.
> X axis is time, Y axis is offset.
> 
> Tejun, could this behavior be attributed to dirty throttling?  What
> would be the best way to trace this?

Yeah, seems likely. Can you please try offcputime (or just sample
/proc/PID/stack) and see whether it's in balance dirty pages?

  https://github.com/iovisor/bcc/blob/master/tools/offcputime.py
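
Something like this works for sampling (a sketch; /proc/PID/stack
needs root, while /proc/PID/wchan is readable unprivileged and just
names the function the task sleeps in — PID below is a placeholder for
the stuck copier's pid):

```shell
# Sample where the copier is blocked in the kernel. /proc/PID/stack is
# root-only; /proc/PID/wchan is world-readable and names the function
# the task is sleeping in ("0" if it is runnable).
PID=${PID:-$$}   # placeholder: substitute the pid of the hung mmap process
i=0
while [ "$i" -lt 3 ]; do
  printf 'wchan: %s\n' "$(cat /proc/$PID/wchan 2>/dev/null || echo '?')"
  i=$((i + 1))
  sleep 1
done | sort | uniq -c | sort -rn
```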

If it's dirty throttling, the next step would be watching the bdp
tracepoints to find out what kind of numbers it's getting.

Thanks.

-- 
tejun

* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-03 13:04           ` Tejun Heo
@ 2020-03-03 14:03             ` Michael Stapelberg
  2020-03-03 14:13               ` Tejun Heo
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Stapelberg @ 2020-03-03 14:03 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Miklos Szeredi, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

Here’s a /proc/<pid>/stack from when the issue is happening:

[<0>] balance_dirty_pages_ratelimited+0x2ca/0x3b0
[<0>] __handle_mm_fault+0xe6e/0x1280
[<0>] handle_mm_fault+0xbe/0x1d0
[<0>] __do_page_fault+0x249/0x4f0
[<0>] page_fault+0x1e/0x30

How can I obtain the numbers for the next step?

Thanks,

On Tue, Mar 3, 2020 at 2:04 PM Tejun Heo <tj@kernel.org> wrote:
>
> Hello,
>
> Sorry about the delay.
>
> On Wed, Feb 26, 2020 at 08:59:55PM +0100, Miklos Szeredi wrote:
> > - apparently memcpy is copying downwards (from largest address to
> > smallest address).  Not sure why, when I run the reproducer, it copies
> > upwards.
> > - there's a slow batch of reads of the first ~4MB of data, then a
> > quick writeback
> > - there's a quick read of the rest (~95MB) of data, then a quick
> > writeback of the same
> >
> > Plots of the whole and closeups of slow and quick segments attached.
> > X axis is time, Y axis is offset.
> >
> > Tejun, could this behavior be attributed to dirty throttling?  What
> > would be the best way to trace this?
>
> Yeah, seems likely. Can you please try offcputime (or just sample
> /proc/PID/stack) and see whether it's in balance dirty pages?
>
>   https://github.com/iovisor/bcc/blob/master/tools/offcputime.py
>
> If it's dirty throttling, the next step would be watching the bdp
> tracepoints to find out what kind of numbers it's getting.
>
> Thanks.
>
> --
> tejun

* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-03 14:03             ` Michael Stapelberg
@ 2020-03-03 14:13               ` Tejun Heo
  2020-03-03 14:21                 ` Michael Stapelberg
  0 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2020-03-03 14:13 UTC (permalink / raw)
  To: Michael Stapelberg
  Cc: Miklos Szeredi, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

Hello,

On Tue, Mar 03, 2020 at 03:03:58PM +0100, Michael Stapelberg wrote:
> Here’s a /proc/<pid>/stack from when the issue is happening:
> 
> [<0>] balance_dirty_pages_ratelimited+0x2ca/0x3b0
> [<0>] __handle_mm_fault+0xe6e/0x1280
> [<0>] handle_mm_fault+0xbe/0x1d0
> [<0>] __do_page_fault+0x249/0x4f0
> [<0>] page_fault+0x1e/0x30
> 
> How can I obtain the numbers for the next step?

Yes, that's dirty throttling alright. Hopefully, the
balance_dirty_pages tracepoint which can be enabled from under
/sys/kernel/debug/tracing/events/writeback/balance_dirty_pages/ should
tell us why bdp thinks it needs throttling and then we can go from
there. Unfortunately, I'm rather preoccupied and afraid I don't have a
lot of bandwidth to work on it myself for the coming weeks.
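
Roughly (a sketch; this needs root, and tracefs may be mounted at
/sys/kernel/tracing instead of /sys/kernel/debug/tracing on some
setups):

```shell
# Enable the balance_dirty_pages tracepoint, reproduce the slow copy,
# dump the trace buffer, then disable the tracepoint again.
T=/sys/kernel/debug/tracing            # may be /sys/kernel/tracing instead
e=$T/events/writeback/balance_dirty_pages/enable
if [ -w "$e" ]; then
  echo 1 > "$e"
  # ... reproduce here: time ~/mmap /tmp/was /tmp/mnt/tmp/stapelberg.1 ...
  cat "$T/trace" > trace.log
  echo 0 > "$e"
else
  echo "cannot write $e (need root, and tracefs mounted)"
fi
```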

Thanks.

-- 
tejun

* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-03 14:13               ` Tejun Heo
@ 2020-03-03 14:21                 ` Michael Stapelberg
  2020-03-03 14:25                   ` Tejun Heo
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Stapelberg @ 2020-03-03 14:21 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Miklos Szeredi, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

[-- Attachment #1: Type: text/plain, Size: 1126 bytes --]

Find attached trace.log (cat /sys/kernel/debug/tracing/trace) and
fuse-debug.log (FUSE daemon with timestamps).

Does that tell you something, or do we need more data? (If so, how?)

Thanks,

On Tue, Mar 3, 2020 at 3:13 PM Tejun Heo <tj@kernel.org> wrote:
>
> Hello,
>
> On Tue, Mar 03, 2020 at 03:03:58PM +0100, Michael Stapelberg wrote:
> > Here’s a /proc/<pid>/stack from when the issue is happening:
> >
> > [<0>] balance_dirty_pages_ratelimited+0x2ca/0x3b0
> > [<0>] __handle_mm_fault+0xe6e/0x1280
> > [<0>] handle_mm_fault+0xbe/0x1d0
> > [<0>] __do_page_fault+0x249/0x4f0
> > [<0>] page_fault+0x1e/0x30
> >
> > How can I obtain the numbers for the next step?
>
> Yes, that's dirty throttling alright. Hopefully, the
> balance_dirty_pages tracepoint which can be enabled from under
> /sys/kernel/debug/tracing/events/writeback/balance_dirty_pages/ should
> tell us why bdp thinks it needs throttling and then we can go from
> there. Unfortunately, I'm rather preoccupied and afraid I don't have a
> lot of bandwidth to work on it myself for the coming weeks.
>
> Thanks.
>
> --
> tejun

[-- Attachment #2: trace.log --]
[-- Type: text/x-log, Size: 39107 bytes --]

# cat /sys/kernel/debug/tracing/trace

# tracer: nop
#
# entries-in-buffer/entries-written: 146/146   #P:72
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
           <...>-116776 [051] .... 1319815.661693: balance_dirty_pages: bdi 0:93: limit=5816293 setpoint=5101938 dirty=360 bdi_setpoint=0 bdi_dirty=32 dirty_ratelimit=28 task_ratelimit=0 dirtied=32 dirtied_pause=32 paused=0 pause=132 period=132 think=0 cgroup_ino=1
           <...>-116776 [051] .... 1319815.794191: balance_dirty_pages: bdi 0:93: limit=5816293 setpoint=5101930 dirty=392 bdi_setpoint=0 bdi_dirty=33 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=136 period=136 think=4 cgroup_ino=1
           <...>-116776 [051] .... 1319815.934184: balance_dirty_pages: bdi 0:93: limit=5852298 setpoint=5119867 dirty=393 bdi_setpoint=0 bdi_dirty=34 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=140 period=140 think=4 cgroup_ino=1
           <...>-116776 [051] .... 1319816.078232: balance_dirty_pages: bdi 0:93: limit=5852298 setpoint=5119867 dirty=394 bdi_setpoint=0 bdi_dirty=35 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=144 period=144 think=4 cgroup_ino=1
           <...>-116776 [051] .... 1319816.226474: balance_dirty_pages: bdi 0:93: limit=5852297 setpoint=5119843 dirty=293 bdi_setpoint=0 bdi_dirty=36 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=148 period=148 think=4 cgroup_ino=1
           <...>-116776 [051] .... 1319816.378232: balance_dirty_pages: bdi 0:93: limit=5852297 setpoint=5119848 dirty=293 bdi_setpoint=0 bdi_dirty=37 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=152 period=152 think=4 cgroup_ino=1
           <...>-116776 [051] .... 1319816.534228: balance_dirty_pages: bdi 0:93: limit=5852296 setpoint=5119848 dirty=294 bdi_setpoint=0 bdi_dirty=38 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=156 period=156 think=4 cgroup_ino=1
           <...>-116776 [051] .... 1319816.694184: balance_dirty_pages: bdi 0:93: limit=5852296 setpoint=5119844 dirty=294 bdi_setpoint=0 bdi_dirty=39 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=160 period=160 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319816.858296: balance_dirty_pages: bdi 0:93: limit=5852295 setpoint=5119843 dirty=294 bdi_setpoint=0 bdi_dirty=40 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=164 period=164 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319817.026233: balance_dirty_pages: bdi 0:93: limit=5852295 setpoint=5119846 dirty=295 bdi_setpoint=0 bdi_dirty=41 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=168 period=168 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319817.198188: balance_dirty_pages: bdi 0:93: limit=5852293 setpoint=5119830 dirty=298 bdi_setpoint=0 bdi_dirty=42 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=172 period=172 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319817.374240: balance_dirty_pages: bdi 0:93: limit=5852293 setpoint=5119711 dirty=299 bdi_setpoint=0 bdi_dirty=43 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=176 period=176 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319817.738230: balance_dirty_pages: bdi 0:93: limit=5852263 setpoint=5119325 dirty=300 bdi_setpoint=0 bdi_dirty=45 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=184 period=184 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319817.926248: balance_dirty_pages: bdi 0:93: limit=5852281 setpoint=5119852 dirty=300 bdi_setpoint=0 bdi_dirty=46 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=188 period=188 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319818.118243: balance_dirty_pages: bdi 0:93: limit=5852281 setpoint=5119848 dirty=300 bdi_setpoint=0 bdi_dirty=47 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=192 period=192 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319818.314241: balance_dirty_pages: bdi 0:93: limit=5852308 setpoint=5119876 dirty=317 bdi_setpoint=0 bdi_dirty=48 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=196 period=196 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319818.514238: balance_dirty_pages: bdi 0:93: limit=5852308 setpoint=5119876 dirty=318 bdi_setpoint=0 bdi_dirty=49 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319818.718260: balance_dirty_pages: bdi 0:93: limit=5852311 setpoint=5119878 dirty=321 bdi_setpoint=0 bdi_dirty=50 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319818.926244: balance_dirty_pages: bdi 0:93: limit=5852414 setpoint=5119969 dirty=322 bdi_setpoint=0 bdi_dirty=51 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [052] .... 1319819.130239: balance_dirty_pages: bdi 0:93: limit=5852422 setpoint=5119976 dirty=328 bdi_setpoint=0 bdi_dirty=52 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319819.334533: balance_dirty_pages: bdi 0:93: limit=5852436 setpoint=5119988 dirty=329 bdi_setpoint=0 bdi_dirty=53 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319819.538237: balance_dirty_pages: bdi 0:93: limit=5852437 setpoint=5119989 dirty=330 bdi_setpoint=0 bdi_dirty=54 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319819.746193: balance_dirty_pages: bdi 0:93: limit=5852470 setpoint=5120018 dirty=332 bdi_setpoint=0 bdi_dirty=55 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [052] .... 1319819.950188: balance_dirty_pages: bdi 0:93: limit=5852476 setpoint=5120023 dirty=333 bdi_setpoint=0 bdi_dirty=56 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319820.158244: balance_dirty_pages: bdi 0:93: limit=5852475 setpoint=5120006 dirty=338 bdi_setpoint=0 bdi_dirty=57 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [052] .... 1319820.362250: balance_dirty_pages: bdi 0:93: limit=5852474 setpoint=5120001 dirty=339 bdi_setpoint=0 bdi_dirty=58 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319820.566250: balance_dirty_pages: bdi 0:93: limit=5852472 setpoint=5119996 dirty=340 bdi_setpoint=0 bdi_dirty=59 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319820.774188: balance_dirty_pages: bdi 0:93: limit=5852471 setpoint=5119996 dirty=309 bdi_setpoint=0 bdi_dirty=60 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [052] .... 1319820.978257: balance_dirty_pages: bdi 0:93: limit=5852471 setpoint=5120014 dirty=308 bdi_setpoint=0 bdi_dirty=61 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319821.182202: balance_dirty_pages: bdi 0:93: limit=5852497 setpoint=5120041 dirty=310 bdi_setpoint=0 bdi_dirty=62 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319821.386244: balance_dirty_pages: bdi 0:93: limit=5852509 setpoint=5120052 dirty=311 bdi_setpoint=0 bdi_dirty=63 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [052] .... 1319821.590250: balance_dirty_pages: bdi 0:93: limit=5852505 setpoint=5119996 dirty=325 bdi_setpoint=0 bdi_dirty=64 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319821.794256: balance_dirty_pages: bdi 0:93: limit=5852499 setpoint=5119964 dirty=326 bdi_setpoint=0 bdi_dirty=65 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319821.998263: balance_dirty_pages: bdi 0:93: limit=5852492 setpoint=5119945 dirty=343 bdi_setpoint=0 bdi_dirty=66 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319822.202269: balance_dirty_pages: bdi 0:93: limit=5852485 setpoint=5119943 dirty=336 bdi_setpoint=0 bdi_dirty=67 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319822.406198: balance_dirty_pages: bdi 0:93: limit=5852477 setpoint=5119920 dirty=337 bdi_setpoint=0 bdi_dirty=68 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319822.610243: balance_dirty_pages: bdi 0:93: limit=5852469 setpoint=5119921 dirty=338 bdi_setpoint=0 bdi_dirty=69 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319822.814469: balance_dirty_pages: bdi 0:93: limit=5852462 setpoint=5119918 dirty=338 bdi_setpoint=0 bdi_dirty=70 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319823.018251: balance_dirty_pages: bdi 0:93: limit=5852433 setpoint=5119648 dirty=338 bdi_setpoint=0 bdi_dirty=71 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319823.222198: balance_dirty_pages: bdi 0:93: limit=5852389 setpoint=5119428 dirty=343 bdi_setpoint=0 bdi_dirty=72 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319823.426198: balance_dirty_pages: bdi 0:93: limit=5852329 setpoint=5119194 dirty=344 bdi_setpoint=0 bdi_dirty=73 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319823.630250: balance_dirty_pages: bdi 0:93: limit=5852309 setpoint=5119637 dirty=348 bdi_setpoint=0 bdi_dirty=74 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319823.834279: balance_dirty_pages: bdi 0:93: limit=5852290 setpoint=5119628 dirty=349 bdi_setpoint=0 bdi_dirty=75 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319824.038214: balance_dirty_pages: bdi 0:93: limit=5852270 setpoint=5119608 dirty=351 bdi_setpoint=0 bdi_dirty=76 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319824.242266: balance_dirty_pages: bdi 0:93: limit=5852253 setpoint=5119629 dirty=370 bdi_setpoint=0 bdi_dirty=77 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319824.446258: balance_dirty_pages: bdi 0:93: limit=5852237 setpoint=5119621 dirty=371 bdi_setpoint=0 bdi_dirty=78 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319824.650227: balance_dirty_pages: bdi 0:93: limit=5852222 setpoint=5119621 dirty=347 bdi_setpoint=0 bdi_dirty=79 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319824.854220: balance_dirty_pages: bdi 0:93: limit=5852208 setpoint=5119614 dirty=348 bdi_setpoint=0 bdi_dirty=80 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319825.058272: balance_dirty_pages: bdi 0:93: limit=5852170 setpoint=5119305 dirty=351 bdi_setpoint=0 bdi_dirty=81 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319825.262266: balance_dirty_pages: bdi 0:93: limit=5852157 setpoint=5119590 dirty=392 bdi_setpoint=0 bdi_dirty=82 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319825.470267: balance_dirty_pages: bdi 0:93: limit=5852145 setpoint=5119593 dirty=392 bdi_setpoint=0 bdi_dirty=83 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [053] .... 1319825.674273: balance_dirty_pages: bdi 0:93: limit=5852137 setpoint=5119623 dirty=394 bdi_setpoint=0 bdi_dirty=84 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319825.878275: balance_dirty_pages: bdi 0:93: limit=5852130 setpoint=5119631 dirty=394 bdi_setpoint=0 bdi_dirty=85 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319826.082216: balance_dirty_pages: bdi 0:93: limit=5852123 setpoint=5119627 dirty=395 bdi_setpoint=0 bdi_dirty=86 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319826.286534: balance_dirty_pages: bdi 0:93: limit=5852116 setpoint=5119621 dirty=269 bdi_setpoint=0 bdi_dirty=87 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319826.490264: balance_dirty_pages: bdi 0:93: limit=5852109 setpoint=5119619 dirty=270 bdi_setpoint=0 bdi_dirty=88 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319826.698198: balance_dirty_pages: bdi 0:93: limit=5852103 setpoint=5119626 dirty=271 bdi_setpoint=0 bdi_dirty=89 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=4 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [053] .... 1319826.902273: balance_dirty_pages: bdi 0:93: limit=5852098 setpoint=5119629 dirty=272 bdi_setpoint=0 bdi_dirty=90 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [053] .... 1319827.106217: balance_dirty_pages: bdi 0:93: limit=5852093 setpoint=5119621 dirty=273 bdi_setpoint=0 bdi_dirty=91 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319827.310215: balance_dirty_pages: bdi 0:93: limit=5852088 setpoint=5119620 dirty=305 bdi_setpoint=0 bdi_dirty=92 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319827.514216: balance_dirty_pages: bdi 0:93: limit=5852084 setpoint=5119627 dirty=306 bdi_setpoint=0 bdi_dirty=93 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319827.718259: balance_dirty_pages: bdi 0:93: limit=5852082 setpoint=5119647 dirty=307 bdi_setpoint=0 bdi_dirty=94 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319827.922276: balance_dirty_pages: bdi 0:93: limit=5852077 setpoint=5119612 dirty=308 bdi_setpoint=0 bdi_dirty=95 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319828.126273: balance_dirty_pages: bdi 0:93: limit=5852074 setpoint=5119625 dirty=311 bdi_setpoint=0 bdi_dirty=96 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319828.330268: balance_dirty_pages: bdi 0:93: limit=5852071 setpoint=5119623 dirty=312 bdi_setpoint=0 bdi_dirty=97 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319828.534223: balance_dirty_pages: bdi 0:93: limit=5852046 setpoint=5119348 dirty=313 bdi_setpoint=0 bdi_dirty=98 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319828.738266: balance_dirty_pages: bdi 0:93: limit=5852007 setpoint=5119148 dirty=314 bdi_setpoint=0 bdi_dirty=99 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319828.942268: balance_dirty_pages: bdi 0:93: limit=5851955 setpoint=5118960 dirty=314 bdi_setpoint=0 bdi_dirty=100 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [053] .... 1319829.146274: balance_dirty_pages: bdi 0:93: limit=5851954 setpoint=5119544 dirty=315 bdi_setpoint=0 bdi_dirty=101 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319829.350279: balance_dirty_pages: bdi 0:93: limit=5851954 setpoint=5119561 dirty=355 bdi_setpoint=0 bdi_dirty=102 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319829.554207: balance_dirty_pages: bdi 0:93: limit=5851954 setpoint=5119561 dirty=356 bdi_setpoint=0 bdi_dirty=103 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319829.758504: balance_dirty_pages: bdi 0:93: limit=5851954 setpoint=5119566 dirty=356 bdi_setpoint=0 bdi_dirty=104 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319829.962217: balance_dirty_pages: bdi 0:93: limit=5851981 setpoint=5119590 dirty=356 bdi_setpoint=0 bdi_dirty=105 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319830.166273: balance_dirty_pages: bdi 0:93: limit=5852035 setpoint=5119637 dirty=359 bdi_setpoint=0 bdi_dirty=106 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319830.370469: balance_dirty_pages: bdi 0:93: limit=5852035 setpoint=5119637 dirty=359 bdi_setpoint=0 bdi_dirty=107 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319830.574526: balance_dirty_pages: bdi 0:93: limit=5852074 setpoint=5119671 dirty=359 bdi_setpoint=0 bdi_dirty=108 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319830.778447: balance_dirty_pages: bdi 0:93: limit=5852089 setpoint=5119684 dirty=359 bdi_setpoint=0 bdi_dirty=109 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319830.982278: balance_dirty_pages: bdi 0:93: limit=5852089 setpoint=5119684 dirty=362 bdi_setpoint=0 bdi_dirty=110 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319831.186310: balance_dirty_pages: bdi 0:93: limit=5852158 setpoint=5119745 dirty=380 bdi_setpoint=0 bdi_dirty=111 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319831.390219: balance_dirty_pages: bdi 0:93: limit=5852347 setpoint=5119910 dirty=381 bdi_setpoint=0 bdi_dirty=112 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319831.594223: balance_dirty_pages: bdi 0:93: limit=5852498 setpoint=5120042 dirty=383 bdi_setpoint=0 bdi_dirty=113 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [000] .... 1319831.798290: balance_dirty_pages: bdi 0:93: limit=5852497 setpoint=5120023 dirty=388 bdi_setpoint=0 bdi_dirty=114 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [001] .... 1319832.002236: balance_dirty_pages: bdi 0:93: limit=5852501 setpoint=5120045 dirty=365 bdi_setpoint=0 bdi_dirty=115 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [001] .... 1319832.206230: balance_dirty_pages: bdi 0:93: limit=5852499 setpoint=5120011 dirty=380 bdi_setpoint=0 bdi_dirty=116 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [001] .... 1319832.410216: balance_dirty_pages: bdi 0:93: limit=5852497 setpoint=5120010 dirty=404 bdi_setpoint=0 bdi_dirty=117 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [002] .... 1319832.614284: balance_dirty_pages: bdi 0:93: limit=5852452 setpoint=5119476 dirty=416 bdi_setpoint=0 bdi_dirty=118 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [002] .... 1319832.818278: balance_dirty_pages: bdi 0:93: limit=5852278 setpoint=5117818 dirty=418 bdi_setpoint=0 bdi_dirty=119 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319833.022283: balance_dirty_pages: bdi 0:93: limit=5852040 setpoint=5116867 dirty=418 bdi_setpoint=0 bdi_dirty=120 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319833.226657: balance_dirty_pages: bdi 0:93: limit=5851798 setpoint=5116616 dirty=444 bdi_setpoint=0 bdi_dirty=121 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319833.434239: balance_dirty_pages: bdi 0:93: limit=5851565 setpoint=5116516 dirty=454 bdi_setpoint=0 bdi_dirty=122 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [003] .... 1319833.638233: balance_dirty_pages: bdi 0:93: limit=5851350 setpoint=5116536 dirty=454 bdi_setpoint=0 bdi_dirty=123 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319833.842238: balance_dirty_pages: bdi 0:93: limit=5851156 setpoint=5116611 dirty=454 bdi_setpoint=0 bdi_dirty=124 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319834.046245: balance_dirty_pages: bdi 0:93: limit=5850982 setpoint=5116691 dirty=454 bdi_setpoint=0 bdi_dirty=125 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319834.250225: balance_dirty_pages: bdi 0:93: limit=5850783 setpoint=5116223 dirty=460 bdi_setpoint=0 bdi_dirty=126 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319834.454236: balance_dirty_pages: bdi 0:93: limit=5850577 setpoint=5115961 dirty=461 bdi_setpoint=0 bdi_dirty=127 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [003] .... 1319834.658278: balance_dirty_pages: bdi 0:93: limit=5850428 setpoint=5116491 dirty=463 bdi_setpoint=0 bdi_dirty=128 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319834.862242: balance_dirty_pages: bdi 0:93: limit=5850299 setpoint=5116619 dirty=464 bdi_setpoint=0 bdi_dirty=129 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319835.066272: balance_dirty_pages: bdi 0:93: limit=5850173 setpoint=5116533 dirty=1622 bdi_setpoint=0 bdi_dirty=130 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319835.270276: balance_dirty_pages: bdi 0:93: limit=5849999 setpoint=5115825 dirty=3718 bdi_setpoint=0 bdi_dirty=131 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319835.474293: balance_dirty_pages: bdi 0:93: limit=5849785 setpoint=5115171 dirty=5609 bdi_setpoint=0 bdi_dirty=132 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319835.678248: balance_dirty_pages: bdi 0:93: limit=5849577 setpoint=5115067 dirty=6870 bdi_setpoint=0 bdi_dirty=133 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319835.882287: balance_dirty_pages: bdi 0:93: limit=5849375 setpoint=5114957 dirty=12919 bdi_setpoint=0 bdi_dirty=134 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319836.086286: balance_dirty_pages: bdi 0:93: limit=5849177 setpoint=5114825 dirty=595 bdi_setpoint=0 bdi_dirty=135 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319836.290245: balance_dirty_pages: bdi 0:93: limit=5852314 setpoint=5119881 dirty=494 bdi_setpoint=0 bdi_dirty=136 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319836.494365: balance_dirty_pages: bdi 0:93: limit=5852314 setpoint=5119876 dirty=496 bdi_setpoint=0 bdi_dirty=137 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319836.702554: balance_dirty_pages: bdi 0:93: limit=5852331 setpoint=5119896 dirty=499 bdi_setpoint=0 bdi_dirty=138 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [004] .... 1319836.910236: balance_dirty_pages: bdi 0:93: limit=5852408 setpoint=5119963 dirty=500 bdi_setpoint=0 bdi_dirty=139 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [004] .... 1319837.118239: balance_dirty_pages: bdi 0:93: limit=5852427 setpoint=5119980 dirty=501 bdi_setpoint=0 bdi_dirty=140 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=4 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [004] .... 1319837.322298: balance_dirty_pages: bdi 0:93: limit=5852427 setpoint=5119975 dirty=501 bdi_setpoint=0 bdi_dirty=141 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [004] .... 1319837.526295: balance_dirty_pages: bdi 0:93: limit=5852427 setpoint=5119974 dirty=501 bdi_setpoint=0 bdi_dirty=142 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319837.730233: balance_dirty_pages: bdi 0:93: limit=5852427 setpoint=5119970 dirty=501 bdi_setpoint=0 bdi_dirty=143 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319837.938235: balance_dirty_pages: bdi 0:93: limit=5852426 setpoint=5119956 dirty=501 bdi_setpoint=0 bdi_dirty=144 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=4 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [004] .... 1319838.142294: balance_dirty_pages: bdi 0:93: limit=5852425 setpoint=5119965 dirty=540 bdi_setpoint=0 bdi_dirty=145 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [004] .... 1319838.346301: balance_dirty_pages: bdi 0:93: limit=5852425 setpoint=5119970 dirty=541 bdi_setpoint=0 bdi_dirty=146 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319838.550317: balance_dirty_pages: bdi 0:93: limit=5852429 setpoint=5119982 dirty=544 bdi_setpoint=0 bdi_dirty=147 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319838.754296: balance_dirty_pages: bdi 0:93: limit=5852429 setpoint=5119982 dirty=544 bdi_setpoint=0 bdi_dirty=148 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319838.958245: balance_dirty_pages: bdi 0:93: limit=5852429 setpoint=5119982 dirty=545 bdi_setpoint=0 bdi_dirty=149 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319839.162307: balance_dirty_pages: bdi 0:93: limit=5852432 setpoint=5119984 dirty=546 bdi_setpoint=0 bdi_dirty=150 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319839.366304: balance_dirty_pages: bdi 0:93: limit=5852454 setpoint=5120004 dirty=546 bdi_setpoint=0 bdi_dirty=151 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319839.570301: balance_dirty_pages: bdi 0:93: limit=5852454 setpoint=5120003 dirty=546 bdi_setpoint=0 bdi_dirty=152 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319839.774289: balance_dirty_pages: bdi 0:93: limit=5852424 setpoint=5119626 dirty=546 bdi_setpoint=0 bdi_dirty=153 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319839.978241: balance_dirty_pages: bdi 0:93: limit=5852381 setpoint=5119434 dirty=546 bdi_setpoint=0 bdi_dirty=154 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319840.182540: balance_dirty_pages: bdi 0:93: limit=5852392 setpoint=5119949 dirty=560 bdi_setpoint=0 bdi_dirty=155 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319840.386258: balance_dirty_pages: bdi 0:93: limit=5852407 setpoint=5119962 dirty=561 bdi_setpoint=0 bdi_dirty=156 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319840.590330: balance_dirty_pages: bdi 0:93: limit=5852403 setpoint=5119905 dirty=562 bdi_setpoint=0 bdi_dirty=157 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319840.794311: balance_dirty_pages: bdi 0:93: limit=5852397 setpoint=5119880 dirty=563 bdi_setpoint=0 bdi_dirty=158 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319840.998313: balance_dirty_pages: bdi 0:93: limit=5852392 setpoint=5119888 dirty=564 bdi_setpoint=0 bdi_dirty=159 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319841.202320: balance_dirty_pages: bdi 0:93: limit=5852385 setpoint=5119858 dirty=493 bdi_setpoint=0 bdi_dirty=160 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319841.406251: balance_dirty_pages: bdi 0:93: limit=5852379 setpoint=5119860 dirty=493 bdi_setpoint=0 bdi_dirty=161 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319841.610292: balance_dirty_pages: bdi 0:93: limit=5852373 setpoint=5119857 dirty=493 bdi_setpoint=0 bdi_dirty=162 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319841.814263: balance_dirty_pages: bdi 0:93: limit=5852367 setpoint=5119854 dirty=493 bdi_setpoint=0 bdi_dirty=163 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319842.018298: balance_dirty_pages: bdi 0:93: limit=5852361 setpoint=5119851 dirty=493 bdi_setpoint=0 bdi_dirty=164 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319842.222254: balance_dirty_pages: bdi 0:93: limit=5852356 setpoint=5119853 dirty=512 bdi_setpoint=0 bdi_dirty=165 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319842.426267: balance_dirty_pages: bdi 0:93: limit=5852352 setpoint=5119859 dirty=513 bdi_setpoint=0 bdi_dirty=166 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319842.630318: balance_dirty_pages: bdi 0:93: limit=5852340 setpoint=5119761 dirty=514 bdi_setpoint=0 bdi_dirty=167 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319842.834333: balance_dirty_pages: bdi 0:93: limit=5852325 setpoint=5119709 dirty=515 bdi_setpoint=0 bdi_dirty=168 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319843.038314: balance_dirty_pages: bdi 0:93: limit=5852311 setpoint=5119709 dirty=517 bdi_setpoint=0 bdi_dirty=169 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319843.242314: balance_dirty_pages: bdi 0:93: limit=5852298 setpoint=5119708 dirty=523 bdi_setpoint=0 bdi_dirty=170 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319843.446267: balance_dirty_pages: bdi 0:93: limit=5852285 setpoint=5119701 dirty=523 bdi_setpoint=0 bdi_dirty=171 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319843.650585: balance_dirty_pages: bdi 0:93: limit=5852270 setpoint=5119666 dirty=524 bdi_setpoint=0 bdi_dirty=172 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [004] .... 1319843.854320: balance_dirty_pages: bdi 0:93: limit=5852257 setpoint=5119679 dirty=524 bdi_setpoint=0 bdi_dirty=173 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [005] .... 1319844.058335: balance_dirty_pages: bdi 0:93: limit=5852245 setpoint=5119678 dirty=524 bdi_setpoint=0 bdi_dirty=174 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [006] .... 1319844.266325: balance_dirty_pages: bdi 0:93: limit=5852234 setpoint=5119683 dirty=539 bdi_setpoint=0 bdi_dirty=175 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=8 cgroup_ino=1
           <...>-116776 [006] .... 1319844.470315: balance_dirty_pages: bdi 0:93: limit=5852219 setpoint=5119624 dirty=540 bdi_setpoint=0 bdi_dirty=176 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [007] .... 1319844.674278: balance_dirty_pages: bdi 0:93: limit=5852195 setpoint=5119494 dirty=546 bdi_setpoint=0 bdi_dirty=177 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1
           <...>-116776 [007] .... 1319844.878326: balance_dirty_pages: bdi 0:93: limit=5852172 setpoint=5119486 dirty=549 bdi_setpoint=0 bdi_dirty=178 dirty_ratelimit=28 task_ratelimit=0 dirtied=1 dirtied_pause=0 paused=0 pause=200 period=200 think=4 cgroup_ino=1

[-- Attachment #3: fuse-debug.log --]
[-- Type: text/x-log, Size: 4902 bytes --]

15:19:01.437953 LOOKUP /tmp
15:19:01.437993 getattr /tmp
15:19:01.438021    NODEID: 2
15:19:01.438046    unique: 68906, success, outsize: 144
15:19:01.438078 unique: 68908, opcode: GETXATTR (22), nodeid: 2, insize: 68, pid: 116776
15:19:01.438104 getxattr /tmp security.capability 24
15:19:01.438129    unique: 68908, error: -61 (No data available), outsize: 16
15:19:01.438159 unique: 68910, opcode: LOOKUP (1), nodeid: 2, insize: 53, pid: 116776
15:19:01.438189 LOOKUP /tmp/stapelberg.1
15:19:01.438231 getattr /tmp/stapelberg.1
15:19:01.438261    NODEID: 3
15:19:01.438286    unique: 68910, success, outsize: 144
15:19:01.438319 unique: 68912, opcode: GETXATTR (22), nodeid: 3, insize: 68, pid: 116776
15:19:01.438348 getxattr /tmp/stapelberg.1 security.capability 24
15:19:01.438377    unique: 68912, error: -61 (No data available), outsize: 16
15:19:01.438406 unique: 68914, opcode: GETXATTR (22), nodeid: 3, insize: 68, pid: 116776
15:19:01.438436 getxattr /tmp/stapelberg.1 security.capability 24
15:19:01.438461    unique: 68914, error: -61 (No data available), outsize: 16
15:19:01.438493 unique: 68916, opcode: OPEN (14), nodeid: 3, insize: 48, pid: 116776
15:19:01.438522 open flags: 0x8002 /tmp/stapelberg.1
15:19:01.438553    open[4] flags: 0x8002 /tmp/stapelberg.1
15:19:01.438578    unique: 68916, success, outsize: 32
15:19:01.456734 unique: 68918, opcode: GETXATTR (22), nodeid: 3, insize: 68, pid: 116776
15:19:01.456825 getxattr /tmp/stapelberg.1 security.capability 0
15:19:01.456858    unique: 68918, error: -61 (No data available), outsize: 16
15:19:01.456890 unique: 68920, opcode: SETATTR (4), nodeid: 3, insize: 128, pid: 116776
15:19:01.456917 truncate /tmp/stapelberg.1 0
15:19:01.474601 getattr /tmp/stapelberg.1
15:19:01.474703    unique: 68920, success, outsize: 120
15:19:01.474745 unique: 68922, opcode: GETXATTR (22), nodeid: 3, insize: 68, pid: 116776
15:19:01.474773 getxattr /tmp/stapelberg.1 security.capability 0
15:19:01.474798    unique: 68922, error: -61 (No data available), outsize: 16
15:19:01.474826 unique: 68924, opcode: WRITE (16), nodeid: 3, insize: 81, pid: 116776
15:19:01.474858 write[4] 1 bytes to 103809023 flags: 0x8002
15:19:01.474891    write[4] 1 bytes to 103809023
15:19:01.474917    unique: 68924, success, outsize: 24
15:19:01.481195 unique: 68926, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:01.481283 read[4] 4096 bytes from 103804928 flags: 0x8002
15:19:01.481317    read[4] 4096 bytes from 103804928
15:19:01.481343    unique: 68926, success, outsize: 4112
15:19:01.481373 unique: 68928, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:01.481404 read[4] 69632 bytes from 103735296 flags: 0x8002
15:19:01.481433    read[4] 69632 bytes from 103735296
15:19:01.481464    unique: 68928, success, outsize: 69648
15:19:01.481595 unique: 68930, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:01.481634 read[4] 69632 bytes from 103665664 flags: 0x8002
15:19:01.481661    read[4] 69632 bytes from 103665664
15:19:01.481690    unique: 68930, success, outsize: 69648
15:19:02.046413 unique: 68932, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:02.046546 read[4] 69632 bytes from 103596032 flags: 0x8002
15:19:02.046579    read[4] 69632 bytes from 103596032
15:19:02.046605    unique: 68932, success, outsize: 69648
15:19:05.154455 unique: 68934, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:05.154597 read[4] 69632 bytes from 103526400 flags: 0x8002
15:19:05.154633    read[4] 69632 bytes from 103526400
15:19:05.154659    unique: 68934, success, outsize: 69648
15:19:08.634370 unique: 68936, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:08.634502 read[4] 69632 bytes from 103456768 flags: 0x8002
15:19:08.634535    read[4] 69632 bytes from 103456768
15:19:08.634561    unique: 68936, success, outsize: 69648
15:19:12.106478 unique: 68938, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:12.106615 read[4] 69632 bytes from 103387136 flags: 0x8002
15:19:12.106653    read[4] 69632 bytes from 103387136
15:19:12.106689    unique: 68938, success, outsize: 69648
15:19:15.578461 unique: 68940, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:15.578596 read[4] 69632 bytes from 103317504 flags: 0x8002
15:19:15.578641    read[4] 69632 bytes from 103317504
15:19:15.578679    unique: 68940, success, outsize: 69648
15:19:19.046492 unique: 68942, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:19.046674 read[4] 69632 bytes from 103247872 flags: 0x8002
15:19:19.046727    read[4] 69632 bytes from 103247872
15:19:19.046765    unique: 68942, success, outsize: 69648
15:19:22.522429 unique: 68944, opcode: READ (15), nodeid: 3, insize: 80, pid: 116776
15:19:22.522566 read[4] 69632 bytes from 103178240 flags: 0x8002
15:19:22.522610    read[4] 69632 bytes from 103178240
15:19:22.522643    unique: 68944, success, outsize: 69648
[…]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-03 14:21                 ` Michael Stapelberg
@ 2020-03-03 14:25                   ` Tejun Heo
       [not found]                     ` <CANnVG6=yf82CcwmdmawmjTP2CskD-WhcvkLnkZs7hs0OG7KcTg@mail.gmail.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2020-03-03 14:25 UTC (permalink / raw)
  To: Michael Stapelberg
  Cc: Miklos Szeredi, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

On Tue, Mar 03, 2020 at 03:21:47PM +0100, Michael Stapelberg wrote:
> Find attached trace.log (cat /sys/kernel/debug/tracing/trace) and
> fuse-debug.log (FUSE daemon with timestamps).
> 
> Does that tell you something, or do we need more data? (If so, how?)

This is likely the culprit.

 .... 1319822.406198: balance_dirty_pages: ... bdi_dirty=68 dirty_ratelimit=28 ...

For whatever reason, bdp calculated that the dirty throttling
threshold for the fuse device is 28 pages which is extremely low. Need
to track down how that number came to be. I'm afraid from here on it'd
mostly be reading source code and sprinkling printks around but the
debugging really comes down to figuring out how we ended up with 68
and 28.

Thanks.

-- 
tejun


* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
       [not found]                     ` <CANnVG6=yf82CcwmdmawmjTP2CskD-WhcvkLnkZs7hs0OG7KcTg@mail.gmail.com>
@ 2020-03-09 14:32                       ` Michael Stapelberg
  2020-03-09 14:36                         ` Miklos Szeredi
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Stapelberg @ 2020-03-09 14:32 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Miklos Szeredi, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

Here’s one more thing I noticed: when polling
/sys/kernel/debug/bdi/0:93/stats, I see that BdiDirtied and BdiWritten
remain at their original values while the kernel sends FUSE read
requests, and only go up once the kernel transitions into sending
FUSE write requests. Notably, the dirty-page throttling happens during
the read phase, which is most likely why the write bandwidth is
(correctly) measured as 0.

Do we have any ideas on why the kernel sends FUSE reads at all?

On Thu, Mar 5, 2020 at 3:45 PM Michael Stapelberg
<michael+lkml@stapelberg.ch> wrote:
>
> Thanks for taking a look!
>
> Find attached a trace file which illustrates that the device’s write
> bandwidth (write_bw) decreases from the initial 100 MB/s down to,
> eventually, 0 (not included in the trace). When seeing the
> pathologically slow write-back performance, I observed write_bw=0!
>
> The trace was generated with these tracepoints enabled:
> echo 1 > /sys/kernel/debug/tracing/events/writeback/balance_dirty_pages/enable
> echo 1 > /sys/kernel/debug/tracing/events/writeback/bdi_dirty_ratelimit/enable
>
> I wonder why the measured write bandwidth decreases so much. Any thoughts?
>
> On Tue, Mar 3, 2020 at 3:25 PM Tejun Heo <tj@kernel.org> wrote:
> >
> > On Tue, Mar 03, 2020 at 03:21:47PM +0100, Michael Stapelberg wrote:
> > > Find attached trace.log (cat /sys/kernel/debug/tracing/trace) and
> > > fuse-debug.log (FUSE daemon with timestamps).
> > >
> > > Does that tell you something, or do we need more data? (If so, how?)
> >
> > This is likely the culprit.
> >
> >  .... 1319822.406198: balance_dirty_pages: ... bdi_dirty=68 dirty_ratelimit=28 ...
> >
> > For whatever reason, bdp calculated that the dirty throttling
> > threshold for the fuse device is 28 pages which is extremely low. Need
> > to track down how that number came to be. I'm afraid from here on it'd
> > mostly be reading source code and sprinkling printks around but the
> > debugging really comes down to figuring out how we ended up with 68
> > and 28.
> >
> > Thanks.
> >
> > --
> > tejun


* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-09 14:32                       ` Michael Stapelberg
@ 2020-03-09 14:36                         ` Miklos Szeredi
  2020-03-09 15:11                           ` Michael Stapelberg
  0 siblings, 1 reply; 11+ messages in thread
From: Miklos Szeredi @ 2020-03-09 14:36 UTC (permalink / raw)
  To: Michael Stapelberg
  Cc: Tejun Heo, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

On Mon, Mar 9, 2020 at 3:32 PM Michael Stapelberg
<michael+lkml@stapelberg.ch> wrote:
>
> Here’s one more thing I noticed: when polling
> /sys/kernel/debug/bdi/0:93/stats, I see that BdiDirtied and BdiWritten
> remain at their original values while the kernel sends FUSE read
> requests, and only goes up when the kernel transitions into sending
> FUSE write requests. Notably, the page dirtying throttling happens in
> the read phase, which is most likely why the write bandwidth is
> (correctly) measured as 0.
>
> Do we have any ideas on why the kernel sends FUSE reads at all?

Memory writes (stores) need the memory page to be up-to-date wrt. the
backing file before proceeding.   This means that if the page hasn't
yet been cached by the kernel, it needs to be read first.

Thanks,
Miklos


* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-09 14:36                         ` Miklos Szeredi
@ 2020-03-09 15:11                           ` Michael Stapelberg
  2020-03-12 15:45                             ` Michael Stapelberg
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Stapelberg @ 2020-03-09 15:11 UTC (permalink / raw)
  To: Miklos Szeredi; +Cc: Tejun Heo, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2541 bytes --]

Thanks for clarifying. I have modified the mmap test program (see
attached) to optionally read in the entire file when the WORKAROUND=
environment variable is set, thereby preventing the FUSE reads in the
write phase. I can now see a batch of reads, followed by a batch of
writes.

What’s interesting: when polling using “while :; do grep ^Bdi
/sys/kernel/debug/bdi/0:93/stats; sleep 0.1; done” and running the
mmap test program, I see:

BdiDirtied:            3566304 kB
BdiWritten:            3563616 kB
BdiWriteBandwidth:       13596 kBps

BdiDirtied:            3566304 kB
BdiWritten:            3563616 kB
BdiWriteBandwidth:       13596 kBps

BdiDirtied:            3566528 kB (+224 kB) <-- starting to dirty pages
BdiWritten:            3564064 kB (+448 kB) <-- starting to write
BdiWriteBandwidth:       10700 kBps <-- only bandwidth update!

BdiDirtied:            3668224 kB (+ 101696 kB) <-- all pages dirtied
BdiWritten:            3565632 kB (+1568 kB)
BdiWriteBandwidth:       10700 kBps

BdiDirtied:            3668224 kB
BdiWritten:            3665536 kB (+ 99904 kB) <-- all pages written
BdiWriteBandwidth:       10700 kBps

BdiDirtied:            3668224 kB
BdiWritten:            3665536 kB
BdiWriteBandwidth:       10700 kBps

This seems to suggest that the bandwidth measurement only captures the
rising slope of the transfer, not the bulk of the transfer itself,
resulting in an inaccurate estimate. The effect is worse when the test
program doesn’t pre-read the output file, because the kernel then
completes fewer FUSE write requests per measurement period.

On Mon, Mar 9, 2020 at 3:36 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Mon, Mar 9, 2020 at 3:32 PM Michael Stapelberg
> <michael+lkml@stapelberg.ch> wrote:
> >
> > Here’s one more thing I noticed: when polling
> > /sys/kernel/debug/bdi/0:93/stats, I see that BdiDirtied and BdiWritten
> > remain at their original values while the kernel sends FUSE read
> > requests, and only goes up when the kernel transitions into sending
> > FUSE write requests. Notably, the page dirtying throttling happens in
> > the read phase, which is most likely why the write bandwidth is
> > (correctly) measured as 0.
> >
> > Do we have any ideas on why the kernel sends FUSE reads at all?
>
> Memory writes (stores) need the memory page to be up-to-date wrt. the
> backing file before proceeding.   This means that if the page hasn't
> yet been cached by the kernel, it needs to be read first.
>
> Thanks,
> Miklos

[-- Attachment #2: mmap.c --]
[-- Type: text/x-csrc, Size: 2495 bytes --]

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h> 
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdint.h>

/*
 * An implementation of copy ("cp") that uses memory maps.  Various
 * error checking has been removed to promote readability
 */

// Where we want the source file's memory map to live in virtual memory
// The destination file resides immediately after the source file
#define MAP_LOCATION 0x6100

int main (int argc, char *argv[]) {
 int fdin, fdout;
 char *src, *dst;
 struct stat statbuf;
 off_t fileSize = 0;

 if (argc != 3) {
   fprintf (stderr, "usage: %s <fromfile> <tofile>\n", argv[0]);
   exit(1);
 }

 /* open the input file */
 if ((fdin = open (argv[1], O_RDONLY)) < 0) {
   fprintf (stderr, "can't open %s for reading\n", argv[1]);
   exit(1);
 }

 /* open/create the output file */
 if ((fdout = open (argv[2], O_RDWR | O_CREAT | O_TRUNC, 0600)) < 0) {
   fprintf (stderr, "can't create %s for writing\n", argv[2]);
   exit(1);
 }

 /* find size of input file */
 fstat (fdin, &statbuf);
 fileSize = statbuf.st_size;

 /* extend the output file: seek to the location corresponding to the
    last byte... */
 if (lseek (fdout, fileSize - 1, SEEK_SET) == -1) {
   fprintf (stderr, "lseek error\n");
   exit(1);
 }

 /* ...and write a dummy byte at that last location */
 write (fdout, "", 1);
 
 /* 
  * memory map the input file.  Only the first two arguments are
  * interesting: 1) the location and 2) the size of the memory map 
  * in virtual memory space. Note that the location is only a "hint";
  * the OS can choose to return a different virtual memory address.
  * This is illustrated by the printf command below.
 */

 src = mmap ((void*) MAP_LOCATION, fileSize,
	     PROT_READ, MAP_SHARED | MAP_POPULATE, fdin, 0);

 /* memory map the output file after the input file */
 dst = mmap ((char*) MAP_LOCATION + fileSize, fileSize,
	     PROT_READ | PROT_WRITE, MAP_SHARED, fdout, 0);

 if (src == MAP_FAILED || dst == MAP_FAILED) {
   fprintf (stderr, "mmap failed\n");
   exit(1);
 }

 printf("pid: %d\n", getpid());
 printf("Mapped src: %p  and dst: %p\n", (void*)src, (void*)dst);

 if (getenv("WORKAROUND") != NULL) {
   printf("workaround: reading output file before dirtying its pages\n");
   uint8_t sum = 0;
   uint8_t *ptr = (uint8_t*)dst;
   for (off_t i = 0; i < fileSize; i++) {
     sum += *ptr;
     ptr++;
   }
   printf("sum: %d\n", sum);
   sleep(1);
   printf("writing\n");
 }

 /* Copy the input file to the output file */
 memcpy (dst, src, fileSize);

 printf("memcpy done\n");

 /* clean up: unmap memory and close the files */
 munmap (src, fileSize);
 munmap (dst, fileSize);
 close (fdin);
 close (fdout);

 return 0;
} /* main */


* Re: [fuse-devel] Writing to FUSE via mmap extremely slow (sometimes) on some machines?
  2020-03-09 15:11                           ` Michael Stapelberg
@ 2020-03-12 15:45                             ` Michael Stapelberg
  0 siblings, 0 replies; 11+ messages in thread
From: Michael Stapelberg @ 2020-03-12 15:45 UTC (permalink / raw)
  To: Miklos Szeredi; +Cc: Tejun Heo, Jack Smith, fuse-devel, linux-fsdevel, linux-mm

[-- Attachment #1: Type: text/plain, Size: 3080 bytes --]

Find attached a patch which introduces min_bw and max_bw limits for a
backing_dev_info. As outlined in the commit description, this can be
used to work around the issue until we have a better understanding of
what a real solution would look like.

Could we include this change in Linux? What would be the next step?

Thanks,

On Mon, Mar 9, 2020 at 4:11 PM Michael Stapelberg
<michael+lkml@stapelberg.ch> wrote:
>
> Thanks for clarifying. I have modified the mmap test program (see
> attached) to optionally read in the entire file when the WORKAROUND=
> environment variable is set, thereby preventing the FUSE reads in the
> write phase. I can now see a batch of reads, followed by a batch of
> writes.
>
> What’s interesting: when polling using “while :; do grep ^Bdi
> /sys/kernel/debug/bdi/0:93/stats; sleep 0.1; done” and running the
> mmap test program, I see:
>
> BdiDirtied:            3566304 kB
> BdiWritten:            3563616 kB
> BdiWriteBandwidth:       13596 kBps
>
> BdiDirtied:            3566304 kB
> BdiWritten:            3563616 kB
> BdiWriteBandwidth:       13596 kBps
>
> BdiDirtied:            3566528 kB (+224 kB) <-- starting to dirty pages
> BdiWritten:            3564064 kB (+448 kB) <-- starting to write
> BdiWriteBandwidth:       10700 kBps <-- only bandwidth update!
>
> BdiDirtied:            3668224 kB (+ 101696 kB) <-- all pages dirtied
> BdiWritten:            3565632 kB (+1568 kB)
> BdiWriteBandwidth:       10700 kBps
>
> BdiDirtied:            3668224 kB
> BdiWritten:            3665536 kB (+ 99904 kB) <-- all pages written
> BdiWriteBandwidth:       10700 kBps
>
> BdiDirtied:            3668224 kB
> BdiWritten:            3665536 kB
> BdiWriteBandwidth:       10700 kBps
>
> This seems to suggest that the bandwidth measurements only capture the
> rising slope of the transfer, but not the bulk of the transfer itself,
> resulting in inaccurate measurements. This effect is worsened when the
> test program doesn’t pre-read the output file and hence the kernel
> gets fewer FUSE write requests out.
>
> On Mon, Mar 9, 2020 at 3:36 PM Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > On Mon, Mar 9, 2020 at 3:32 PM Michael Stapelberg
> > <michael+lkml@stapelberg.ch> wrote:
> > >
> > > Here’s one more thing I noticed: when polling
> > > /sys/kernel/debug/bdi/0:93/stats, I see that BdiDirtied and BdiWritten
> > > remain at their original values while the kernel sends FUSE read
> > > requests, and only goes up when the kernel transitions into sending
> > > FUSE write requests. Notably, the page dirtying throttling happens in
> > > the read phase, which is most likely why the write bandwidth is
> > > (correctly) measured as 0.
> > >
> > > Do we have any ideas on why the kernel sends FUSE reads at all?
> >
> > Memory writes (stores) need the memory page to be up-to-date wrt. the
> > backing file before proceeding.   This means that if the page hasn't
> > yet been cached by the kernel, it needs to be read first.
> >
> > Thanks,
> > Miklos

[-- Attachment #2: 0001-backing_dev_info-introduce-min_bw-max_bw-limits.patch --]
[-- Type: text/x-patch, Size: 5873 bytes --]

From 10c5fd0412ab71c14cca7a66c2407bfe3bb861af Mon Sep 17 00:00:00 2001
From: Michael Stapelberg <stapelberg@google.com>
Date: Tue, 10 Mar 2020 15:48:20 +0100
Subject: [PATCH] backing_dev_info: introduce min_bw/max_bw limits

This allows working around long-standing significant performance issues when
using mmap with files on FUSE file systems such as ObjFS.

The page-writeback code tries to measure how quick file system backing devices
are able to write data.

Our usage pattern seems to hit a pathological code path: the kernel only
ever measures the (non-representative) rising slope at the start of the
transfer, but the transfer is already over before the kernel can measure
the representative steady state.

As a consequence, the FUSE write bandwidth sinks steadily down to 0 (!) and
heavily throttles page dirtying in programs trying to write to FUSE.

This patch adds a knob which allows avoiding this situation entirely on a
per-file-system basis by restricting the minimum/maximum bandwidth.

There are no negative effects expected from applying this patch.

See also the discussion on the Linux Kernel Mailing List:

https://lore.kernel.org/linux-fsdevel/CANnVG6n=ySfe1gOr=0ituQidp56idGARDKHzP0hv=ERedeMrMA@mail.gmail.com/

To inspect the measured bandwidth, check the BdiWriteBandwidth field in
e.g. /sys/kernel/debug/bdi/0:93/stats.

To pin the measured bandwidth to its default of 100 MB/s, use:

    echo 25600 > /sys/class/bdi/0:42/min_bw
    echo 25600 > /sys/class/bdi/0:42/max_bw
---
 include/linux/backing-dev-defs.h |  2 ++
 include/linux/backing-dev.h      |  3 +++
 mm/backing-dev.c                 | 40 ++++++++++++++++++++++++++++++++
 mm/page-writeback.c              | 29 +++++++++++++++++++++++
 4 files changed, 74 insertions(+)

diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 4fc87dee005a..a29bcb8a799d 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -200,6 +200,8 @@ struct backing_dev_info {
 	unsigned int capabilities; /* Device capabilities */
 	unsigned int min_ratio;
 	unsigned int max_ratio, max_prop_frac;
+	u64 min_bw;
+	u64 max_bw;
 
 	/*
 	 * Sum of avg_write_bw of wbs with dirty inodes.  > 0 if there are
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index f88197c1ffc2..4490bd03aec1 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -111,6 +111,9 @@ static inline unsigned long wb_stat_error(void)
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio);
 int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio);
 
+int bdi_set_min_bw(struct backing_dev_info *bdi, u64 min_bw);
+int bdi_set_max_bw(struct backing_dev_info *bdi, u64 max_bw);
+
 /*
  * Flags in backing_dev_info::capability
  *
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 62f05f605fb5..5c10d4425976 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -201,6 +201,44 @@ static ssize_t max_ratio_store(struct device *dev,
 }
 BDI_SHOW(max_ratio, bdi->max_ratio)
 
+static ssize_t min_bw_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct backing_dev_info *bdi = dev_get_drvdata(dev);
+	unsigned long long limit;
+	ssize_t ret;
+
+	ret = kstrtoull(buf, 10, &limit);
+	if (ret < 0)
+		return ret;
+
+	ret = bdi_set_min_bw(bdi, limit);
+	if (!ret)
+		ret = count;
+
+	return ret;
+}
+BDI_SHOW(min_bw, bdi->min_bw)
+
+static ssize_t max_bw_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct backing_dev_info *bdi = dev_get_drvdata(dev);
+	unsigned long long limit;
+	ssize_t ret;
+
+	ret = kstrtoull(buf, 10, &limit);
+	if (ret < 0)
+		return ret;
+
+	ret = bdi_set_max_bw(bdi, limit);
+	if (!ret)
+		ret = count;
+
+	return ret;
+}
+BDI_SHOW(max_bw, bdi->max_bw)
+
 static ssize_t stable_pages_required_show(struct device *dev,
 					  struct device_attribute *attr,
 					  char *page)
@@ -216,6 +254,8 @@ static struct attribute *bdi_dev_attrs[] = {
 	&dev_attr_read_ahead_kb.attr,
 	&dev_attr_min_ratio.attr,
 	&dev_attr_max_ratio.attr,
+	&dev_attr_min_bw.attr,
+	&dev_attr_max_bw.attr,
 	&dev_attr_stable_pages_required.attr,
 	NULL,
 };
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2caf780a42e7..c7c9eebc4c56 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -713,6 +713,22 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
 }
 EXPORT_SYMBOL(bdi_set_max_ratio);
 
+int bdi_set_min_bw(struct backing_dev_info *bdi, u64 min_bw)
+{
+	spin_lock_bh(&bdi_lock);
+	bdi->min_bw = min_bw;
+	spin_unlock_bh(&bdi_lock);
+	return 0;
+}
+
+int bdi_set_max_bw(struct backing_dev_info *bdi, u64 max_bw)
+{
+	spin_lock_bh(&bdi_lock);
+	bdi->max_bw = max_bw;
+	spin_unlock_bh(&bdi_lock);
+	return 0;
+}
+
 static unsigned long dirty_freerun_ceiling(unsigned long thresh,
 					   unsigned long bg_thresh)
 {
@@ -1080,6 +1096,16 @@ static void wb_position_ratio(struct dirty_throttle_control *dtc)
 	dtc->pos_ratio = pos_ratio;
 }
 
+static u64 clamp_bw(struct backing_dev_info *bdi, u64 bw) {
+	if (bdi->min_bw > 0 && bw < bdi->min_bw) {
+		bw = bdi->min_bw;
+	}
+	if (bdi->max_bw > 0 && bw > bdi->max_bw) {
+		bw = bdi->max_bw;
+	}
+	return bw;
+}
+
 static void wb_update_write_bandwidth(struct bdi_writeback *wb,
 				      unsigned long elapsed,
 				      unsigned long written)
@@ -1103,12 +1129,15 @@ static void wb_update_write_bandwidth(struct bdi_writeback *wb,
 	bw *= HZ;
 	if (unlikely(elapsed > period)) {
 		bw = div64_ul(bw, elapsed);
+		bw = clamp_bw(wb->bdi, bw);
 		avg = bw;
 		goto out;
 	}
 	bw += (u64)wb->write_bandwidth * (period - elapsed);
 	bw >>= ilog2(period);
 
+	bw = clamp_bw(wb->bdi, bw);
+
 	/*
 	 * one more level of smoothing, for filtering out sudden spikes
 	 */
-- 
2.25.1



end of thread, other threads:[~2020-03-12 15:45 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-24 13:29 Writing to FUSE via mmap extremely slow (sometimes) on some machines? Michael Stapelberg
     [not found] ` <CACQJH27s4HKzPgUkVT+FXWLGqJAAMYEkeKe7cidcesaYdE2Vog@mail.gmail.com>
     [not found]   ` <CANnVG6=Ghu5r44mTkr0uXx_ZrrWo2N5C_UEfM59110Zx+HApzw@mail.gmail.com>
     [not found]     ` <CAJfpegvzhfO7hg1sb_ttQF=dmBeg80WVkV8srF3VVYHw9ybV0w@mail.gmail.com>
     [not found]       ` <CANnVG6kSJJw-+jtjh-ate7CC3CsB2=ugnQpA9ACGFdMex8sftg@mail.gmail.com>
     [not found]         ` <CAJfpegtkEU9=3cvy8VNr4SnojErYFOTaCzUZLYvMuQMi050bPQ@mail.gmail.com>
2020-03-03 10:34           ` [fuse-devel] " Michael Stapelberg
2020-03-03 13:04           ` Tejun Heo
2020-03-03 14:03             ` Michael Stapelberg
2020-03-03 14:13               ` Tejun Heo
2020-03-03 14:21                 ` Michael Stapelberg
2020-03-03 14:25                   ` Tejun Heo
     [not found]                     ` <CANnVG6=yf82CcwmdmawmjTP2CskD-WhcvkLnkZs7hs0OG7KcTg@mail.gmail.com>
2020-03-09 14:32                       ` Michael Stapelberg
2020-03-09 14:36                         ` Miklos Szeredi
2020-03-09 15:11                           ` Michael Stapelberg
2020-03-12 15:45                             ` Michael Stapelberg
