From: "Andy Lutomirski" <luto@kernel.org>
To: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: "Jann Horn" <jannh@google.com>,
"Peter Oskolkov" <posk@google.com>,
"Peter Oskolkov" <posk@posk.io>, "Ingo Molnar" <mingo@redhat.com>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
"Linux API" <linux-api@vger.kernel.org>,
"Paul Turner" <pjt@google.com>, "Ben Segall" <bsegall@google.com>,
"Andrei Vagin" <avagin@google.com>,
"Thierry Delisle" <tdelisle@uwaterloo.ca>
Subject: Re: [PATCH 2/4 v0.5] sched/umcg: RFC: add userspace atomic helpers
Date: Tue, 14 Sep 2021 11:40:01 -0700
Message-ID: <f6fdecfe-963d-4669-ae05-1d7192467a19@www.fastmail.com>
In-Reply-To: <YUDlzxLjNsW+oYGC@hirez.programming.kicks-ass.net>

On Tue, Sep 14, 2021, at 11:11 AM, Peter Zijlstra wrote:
> On Tue, Sep 14, 2021 at 09:52:08AM -0700, Andy Lutomirski wrote:
> > With a custom mapping, you don’t need to pin pages at all, I think.
> > As long as you can reconstruct the contents of the shared page and
> > you’re willing to do some slightly careful synchronization, you can
> > detect that the page is missing when you try to update it and skip the
> > update. The vm_ops->fault handler can repopulate the page the next
> > time it’s accessed.
>
> The point is that the moment we know we need to do this user-poke, is
> schedule(), which could be called while holding mmap_sem (it being a
> preemptable lock). Which means we cannot go and do faults.
That’s fine. The page would be in one of two states: present and writable by the kernel, or completely gone. If it’s present, the scheduler writes it. If it’s gone, the scheduler skips the write and the next fault fills it in.
>
> > All that being said, I feel like I’m missing something. The point of
> > this is to send what the old M:N folks called “scheduler activations”,
> > right? Wouldn’t it be more efficient to explicitly wake something
> > blockable/pollable and write the message into a more efficient data
> > structure? Polling one page per task from userspace seems like it
> > will have inherently high latency due to the polling interval and will
> > also have very poor locality. Or am I missing something?
>
> The idea was to link the user structures together in a (single) linked
> list. The server structure gets a list of all the blocked tasks. This
> avoids having to do a full iteration over N tasks (like Java, they're
> talking stupid numbers of N).
>
> Polling should not happen; once we run out of runnable tasks, the server
> task gets run again and it can instantly pick up all the blocked
> notifications.
>
How does the server task know when to read the linked list? And what’s wrong with a ring buffer or a syscall?