linux-kernel.vger.kernel.org archive mirror
From: Andrea Arcangeli <andrea@suse.de>
To: Hugh Dickins <hugh@veritas.com>
Cc: "Martin J. Bligh" <Martin.Bligh@us.ibm.com>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: Scalability problem (kmap_lock) with -aa kernels
Date: Wed, 20 Mar 2002 19:56:47 +0100	[thread overview]
Message-ID: <20020320195647.K4268@dualathlon.random> (raw)
In-Reply-To: <257350410.1016612071@[10.10.2.3]> <Pine.LNX.4.21.0203201757030.1428-100000@localhost.localdomain>

On Wed, Mar 20, 2002 at 06:15:10PM +0000, Hugh Dickins wrote:
> My guess: persistent kmaps are okay, kmapped high pagetables are okay,
> persistent kmapped high pagetables are okay.  What's wrong is how we

In UP definitely :)

> flush_all_zero_pkmaps on all cpus, synchronously while holding the
> kmap_lock everyone needs to get a new kmap (and hopefully more often,
> just inc or dec the pkmap_count of kmap already got).  That's what
> cries out for redesign: it's served us well but should now be improved.

I'm not really sure the time spent in the O(N) pass is the problem; I
asked him to decrease it and the contention increased (probably due to
the increased frequency of the global flushes).

The problem is that the page->virtual cache is global, so when you
have to drop the page->virtual entry for a virtual page you need a
global tlb flush. It cannot be a local tlb flush, that is the problem,
and if you want to make it a local flush while still keeping the cache
you need a page->virtual[NR_CPUS] array, which is not feasible: it
would waste too much ram. However, if we could at least run the global
tlb flush outside the spinlock, that would be a nice scalability
optimization, though even that doesn't seem obvious to implement:
since the virtual entry is shared, before we can make it available or
take it we must first do a global flush, to be sure not to crash at
the first schedule().

One way that would be completely scalable in the copy-user paths,
which I outlined in a previous email of this thread, is to make the
pool local to the cpu, but without the page->virtual cache, binding
the task to the current cpu at kmap and unbinding it at the kunmap. I
don't see other ways to get rid of the scalability issues in all the
places.

Andrea


Thread overview: 15+ messages
2002-03-20 16:14 Scalability problem (kmap_lock) with -aa kernels Martin J. Bligh
2002-03-20 16:39 ` Andrea Arcangeli
2002-03-20 17:41   ` Rik van Riel
2002-03-20 18:26     ` Andrea Arcangeli
2002-03-20 19:35       ` Rik van Riel
2002-03-20 18:16   ` Martin J. Bligh
2002-03-20 18:29     ` Martin J. Bligh
2002-03-20 18:40     ` Andrea Arcangeli
2002-03-20 18:15 ` Hugh Dickins
2002-03-20 18:56   ` Andrea Arcangeli [this message]
  -- strict thread matches above, loose matches on Subject: below --
2002-03-19  4:25 Martin J. Bligh
2002-03-19  8:58 ` Rik van Riel
2002-03-20  1:40 ` Andrea Arcangeli
2002-03-20  6:15   ` Martin J. Bligh
2002-03-20 12:30     ` Andrea Arcangeli
