From: Amir Goldstein <amir73il@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>,
	Dave Chinner <david@fromorbit.com>,
	linux-xfs <linux-xfs@vger.kernel.org>,
	Christoph Hellwig <hch@lst.de>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [QUESTION] Long read latencies on mixed rw buffered IO
Date: Mon, 25 Mar 2019 21:57:46 +0200
Message-ID: <CAOQ4uxiTRUe2EUYcuN5xi3SCw6C-=DM+yA1rsRKh_fi0YPEf6Q@mail.gmail.com>
In-Reply-To: <20190325194021.GJ10344@bombadil.infradead.org>

On Mon, Mar 25, 2019 at 9:40 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Mar 25, 2019 at 09:18:51PM +0200, Amir Goldstein wrote:
> > On Mon, Mar 25, 2019 at 8:22 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > On Mon, Mar 25, 2019 at 07:30:39PM +0200, Amir Goldstein wrote:
> > > > On Mon, Mar 25, 2019 at 6:41 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > > > I think it is a bug that we only wake readers at the front of the queue;
> > > > > I think we would get better performance if we wake all readers, i.e. here:
> >
> > So I have no access to the test machine from the earlier runs right now,
> > but when running the same filebench randomrw workload
> > (8 writers, 8 readers; workload file sketched below) on a VM with
> > 2 CPUs and an SSD drive, the results do not look good for this patch:
> >
> > --- v5.1-rc1 / xfs ---
> > rand-write1          852404ops    14202ops/s 110.9mb/s      0.6ms/op [0.01ms - 553.45ms]
> > rand-read1           26117ops      435ops/s   3.4mb/s     18.4ms/op [0.04ms - 632.29ms]
> > 61.088: IO Summary: 878521 ops 14636.774 ops/s 435/14202 rd/wr 114.3mb/s 1.1ms/op
> >

--- v5.1-rc1 / xfs + patch v2 below ---
rand-write1          852487ops    14175ops/s 110.7mb/s      0.6ms/op [0.01ms - 755.24ms]
rand-read1           23194ops      386ops/s   3.0mb/s     20.7ms/op [0.03ms - 755.25ms]
61.187: IO Summary: 875681 ops 14560.980 ops/s 386/14175 rd/wr 113.8mb/s 1.1ms/op
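
(For scale, pulling the quoted numbers together: the 8 reader threads get
~386 of ~14,561 total ops/s here, i.e. under 3% of the throughput, vs
~435 read ops/s on plain v5.1-rc1 above, ~118 ops/s with patch v1,
~1,919 ops/s with XFS_IOLOCK_SHARED removed, and ~2,031 ops/s on ext4,
all quoted below.)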

Not as bad as v1. Only a little worse than master...
It all comes down to the read/write balance, and on SSD I imagine
that balance really changes. That's why I was skeptical about a
one-size-fits-all read/write balance.

Keeping an open mind.
Please throw more patches at me.
I will also test them on a machine with spindles tomorrow.
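
For reference, the randomrw personality used in these runs appears to be
the stock filebench one split into separate reader and writer thread
groups (the separate rand-write1/rand-read1 op counts imply separate
threads). A minimal sketch follows; the directory, file size, I/O size,
and memsize values are assumptions, since the exact .f file was not
posted in this thread. The flowop names match the output above, and
"run 60" matches the ~61s IO Summary timestamps:

set $dir=/mnt/test
set $nthreads=8
set $filesize=5g
set $iosize=8k

# One shared preallocated file, hit by all readers and writers.
define file name=largefile1,path=$dir,size=$filesize,prealloc,reuse

define process name=rand-rw,instances=1
{
  # 8 writer threads doing random buffered writes.
  thread name=rand-write,memsize=5m,instances=$nthreads
  {
    flowop write name=rand-write1,filename=largefile1,iosize=$iosize,random
  }
  # 8 reader threads doing random buffered reads of the same file.
  thread name=rand-read,memsize=5m,instances=$nthreads
  {
    flowop read name=rand-read1,filename=largefile1,iosize=$iosize,random
  }
}

run 60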

Thanks,
Amir.


> > --- v5.1-rc1 / xfs + patch above (v1, wake all readers) ---
> > rand-write1          1117998ops    18621ops/s 145.5mb/s      0.4ms/op [0.01ms - 788.19ms]
> > rand-read1           7089ops      118ops/s   0.9mb/s     67.4ms/op [0.03ms - 792.67ms]
> > 61.091: IO Summary: 1125087 ops 18738.961 ops/s 118/18621 rd/wr 146.4mb/s 0.8ms/op
> >
> > --- v5.1-rc1 / xfs + remove XFS_IOLOCK_SHARED from xfs_file_buffered_aio_read ---
> > rand-write1          1025826ops    17091ops/s 133.5mb/s      0.5ms/op [0.01ms - 909.20ms]
> > rand-read1           115162ops     1919ops/s  15.0mb/s      4.2ms/op [0.00ms - 157.46ms]
> > 61.084: IO Summary: 1140988 ops 19009.369 ops/s 1919/17091 rd/wr 148.5mb/s 0.8ms/op
> >
> > --- v5.1-rc1 / ext4 ---
> > rand-write1          867926ops    14459ops/s 113.0mb/s      0.6ms/op [0.01ms - 886.89ms]
> > rand-read1           121893ops     2031ops/s  15.9mb/s      3.9ms/op [0.00ms - 149.24ms]
> > 61.102: IO Summary: 989819 ops 16489.132 ops/s 2031/14459 rd/wr 128.8mb/s 1.0ms/op
> >
> > So the rw_semaphore fix is not in the ballpark; it's not even looking
> > in the right direction...
> >
> > Any other ideas to try?
>
> Sure!  Maybe the problem is walking the list over and over.  So add new
> readers to the front of the list if the head of the list is a reader;
> otherwise add them to the tail of the list.
>
> (this won't have quite the same effect as the previous patch because
> new readers coming in while the head of the list is a writer will still
> get jumbled with new writers, but it should be better than we have now,
> assuming the problem is that readers are being delayed behind writers).
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index fbe96341beee..56dbbaea90ee 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -250,8 +250,15 @@ __rwsem_down_read_failed_common(struct rw_semaphore *sem, int state)
>                         return sem;
>                 }
>                 adjustment += RWSEM_WAITING_BIAS;
> +               list_add_tail(&waiter.list, &sem->wait_list);
> +       } else {
> +               struct rwsem_waiter *first = list_first_entry(&sem->wait_list,
> +                               typeof(*first), list);
> +               if (first->type == RWSEM_WAITING_FOR_READ)
> +                       list_add(&waiter.list, &sem->wait_list);
> +               else
> +                       list_add_tail(&waiter.list, &sem->wait_list);
>         }
> -       list_add_tail(&waiter.list, &sem->wait_list);
>
>         /* we're now waiting on the lock, but no longer actively locking */
>         count = atomic_long_add_return(adjustment, &sem->count);
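
For completeness, since the diff itself was never quoted in this thread:
the "remove XFS_IOLOCK_SHARED from xfs_file_buffered_aio_read" experiment
in the results above presumably amounts to something like the sketch
below against the v5.1-rc1 fs/xfs/xfs_file.c (a guess at the experiment,
not necessarily the exact change that was tested):

--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ xfs_file_buffered_aio_read @@
 	struct xfs_inode	*ip = XFS_I(file_inode(iocb->ki_filp));
 	ssize_t			ret;

 	trace_xfs_file_buffered_read(ip, iov_iter_count(to), iocb->ki_pos);

-	if (iocb->ki_flags & IOCB_NOWAIT) {
-		if (!xfs_ilock_nowait(ip, XFS_IOLOCK_SHARED))
-			return -EAGAIN;
-	} else {
-		xfs_ilock(ip, XFS_IOLOCK_SHARED);
-	}
+	/*
+	 * Experiment only: buffered read without taking the shared
+	 * iolock, so readers never sleep on the i_rwsem behind writers.
+	 */
 	ret = generic_file_read_iter(iocb, to);
-	xfs_iunlock(ip, XFS_IOLOCK_SHARED);

 	return ret;

With the lock gone, buffered reads give up the read-vs-write atomicity
the iolock provides, which is presumably why this was a diagnostic
experiment rather than a candidate fix.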
