From: Boaz Harrosh <openosd@gmail.com>
To: Dan Williams <dan.j.williams@intel.com>,
	Matthew Wilcox <willy@infradead.org>
Cc: Seema Pandit <seema.pandit@intel.com>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	stable <stable@vger.kernel.org>,
	Robert Barror <robert.barror@intel.com>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Jan Kara <jack@suse.cz>
Subject: Re: [PATCH] filesystem-dax: Disable PMD support
Date: Wed, 3 Jul 2019 03:22:49 +0300	[thread overview]
Message-ID: <fa9b9165-7910-1fbd-fb5b-78023936d2f2@gmail.com> (raw)
In-Reply-To: <CAPcyv4iEkN1o5HD6Gb9m5ohdAVQhmtiTDcFE+PMQczYx635Vwg@mail.gmail.com>

On 02/07/2019 18:37, Dan Williams wrote:
<>
> 
> I'd be inclined to do the brute force fix of not trying to get fancy
> with separate PTE/PMD waitqueues and then follow on with a more clever
> performance enhancement later. Thoughts about that?
> 

Sir Dan

I do not understand how separate waitqueues would be a performance
enhancement. The whole point of the waitqueues is that there are enough of
them, and the hash function spreads lockers randomly enough, that each
locker effectively gets its own waitqueue; queues are only shared when the
system is heavily contended. And sharing under contention is good, because
at that point you want back pressure on the app anyway.
(Because pmem IO is mostly CPU bound, with no long-term sleeps, I do not
 think you will ever get to that situation.)
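
For illustration, here is a minimal userspace sketch of that kind of
shared hashed wait table (this is not the fs/dax.c code itself; the names
wait_table, WAIT_TABLE_BITS and entry_waitqueue() are made up for the
example). PTE and PMD waiters would both hash into the same table, keyed
by (mapping, index):

#include <pthread.h>
#include <stdint.h>

#define WAIT_TABLE_BITS    12
#define WAIT_TABLE_ENTRIES (1u << WAIT_TABLE_BITS)

/* One waitqueue per bucket; all entry types share the same table. */
struct waitq {
	pthread_mutex_t lock;
	pthread_cond_t  cond;
};

static struct waitq wait_table[WAIT_TABLE_ENTRIES];

static void wait_table_init(void)
{
	for (unsigned i = 0; i < WAIT_TABLE_ENTRIES; i++) {
		pthread_mutex_init(&wait_table[i].lock, NULL);
		pthread_cond_init(&wait_table[i].cond, NULL);
	}
}

/*
 * Multiplicative hash of (mapping, index) into the table, in the spirit
 * of the kernel's hash_long(). With 4096 buckets, two lockers only share
 * a queue on a (rare) hash collision or under heavy contention.
 */
static struct waitq *entry_waitqueue(const void *mapping, unsigned long index)
{
	uint64_t h = ((uint64_t)(uintptr_t)mapping ^ index)
			* 0x61C8864680B583EBull; /* 2^64 / golden ratio */
	return &wait_table[h >> (64 - WAIT_TABLE_BITS)];
}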

So the way I understand it, having twice as many waitqueues serving both
entry types will perform better overall than segregating the types, each
with half the number of queues.
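
(Back-of-envelope, under the usual birthday approximation: w waiters
hashed into q buckets produce about w^2/(2q) colliding pairs. If the
workload is mostly one entry type, segregated tables leave half the
buckets idle, so the same w waiters land in q buckets instead of 2q,
i.e. roughly twice the collisions.)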

(And that is regardless of the problem above, that the segregation is not
race clean.)

Thanks
Boaz

Thread overview: 24 messages

2019-06-27  0:15 [PATCH] filesystem-dax: Disable PMD support Dan Williams
2019-06-27 12:34 ` Matthew Wilcox
2019-06-27 16:06   ` Dan Williams
2019-06-27 18:29     ` Dan Williams
2019-06-27 18:58       ` Dan Williams
2019-06-27 19:09         ` Dan Williams
2019-06-27 19:59           ` Matthew Wilcox
2019-06-28  2:39             ` Dan Williams
2019-06-28 16:37               ` Matthew Wilcox
2019-06-28 16:39                 ` Dan Williams
2019-06-28 16:54                   ` Matthew Wilcox
2019-06-29 16:03               ` Matthew Wilcox
2019-06-30  7:27                 ` Dan Williams
2019-06-30  8:01                   ` Dan Williams
2019-06-30 15:23                     ` Matthew Wilcox
2019-06-30 21:37                       ` Dan Williams
2019-07-02  3:34                         ` Matthew Wilcox
2019-07-02 15:37                           ` Dan Williams
2019-07-03  0:22                             ` Boaz Harrosh [this message]
2019-07-03  0:42                               ` Dan Williams
2019-07-03  1:39                                 ` Boaz Harrosh
2019-07-01 12:11                       ` Jan Kara
2019-07-03 15:47                         ` Matthew Wilcox
2019-07-04 16:40                           ` Jan Kara
