From: Sami Tolvanen <samitolvanen@google.com>
To: Guo Ren <guoren@kernel.org>
Cc: Deepak Gupta <debug@rivosinc.com>,
palmer@dabbelt.com, linux-kernel@vger.kernel.org,
linux-riscv@lists.infradead.org,
Jisheng Zhang <jszhang@kernel.org>
Subject: Re: [PATCH v2] riscv: VMAP_STACK overflow detection thread-safe
Date: Mon, 24 Jul 2023 09:34:04 -0700 [thread overview]
Message-ID: <CABCJKueNhc8qCbZbHJqdCB+PHHy0u5ETP4uWfpWBRaOMX6U6hA@mail.gmail.com> (raw)
In-Reply-To: <CAJF2gTRFvSvEvQeDugdp73o7w4ArdtQ99JScEbLkaLnFcftVcA@mail.gmail.com>
On Thu, Jul 20, 2023 at 8:06 AM Guo Ren <guoren@kernel.org> wrote:
>
> On Thu, Jul 20, 2023 at 8:19 AM Sami Tolvanen <samitolvanen@google.com> wrote:
> >
> > Are you planning on resending this patch? I see it didn't gain much
> > traction last time, but this looks like a much cleaner solution for
> > selecting the overflow stack than having a `shadow_stack` and calling
> > into C to compute the per-CPU offset. The asm_per_cpu macro would
> > also come in handy when implementing CONFIG_SHADOW_CALL_STACK, which
> > we'd like to have on RISC-V too.
> I remember we ended up with an atomic lock mechanism instead of a
> per-CPU offset, so what's the benefit of the per-CPU style in the
> overflow_stack path?
The benefit is not needing a separate temporary stack and locks just
to compute the per-CPU offset. With CONFIG_SHADOW_CALL_STACK, we would
also need a "shadow" shadow call stack in this case before calling
into C code, at which point computing the offsets directly in assembly
is significantly cleaner and avoids the concurrency issues entirely.
Sami
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
Thread overview: 14+ messages
2022-11-24 9:48 [PATCH v2] riscv: VMAP_STACK overflow detection thread-safe Deepak Gupta
2022-11-24 9:53 ` Guo Ren
2022-11-25 20:13 ` Deepak Gupta
2022-11-24 15:26 ` Jisheng Zhang
2022-11-25 11:29 ` Jisheng Zhang
2022-11-25 21:41 ` Deepak Gupta
2022-11-25 21:35 ` Deepak Gupta
2023-07-20 0:18 ` Sami Tolvanen
2023-07-20 15:06 ` Guo Ren
2023-07-24 16:34 ` Sami Tolvanen [this message]
2023-07-28 12:01 ` Guo Ren
[not found] ` <CAKC1njSaR2d-T_UnVJDZYbROT5OLEjBJ+Aps-UHPFTefDc8=6g@mail.gmail.com>
2023-07-20 18:42 ` Deepak Gupta
2023-07-20 15:10 ` Guo Ren
2023-07-24 17:03 ` Sami Tolvanen