From: Guo Ren <guoren@kernel.org>
To: Sami Tolvanen <samitolvanen@google.com>
Cc: Deepak Gupta <debug@rivosinc.com>, palmer@dabbelt.com,
	linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
	Jisheng Zhang <jszhang@kernel.org>
Subject: Re: [PATCH v2] riscv: VMAP_STACK overflow detection thread-safe
Date: Fri, 28 Jul 2023 20:01:07 +0800
Message-ID: <CAJF2gTThgCp8-KzghR0cFqDWXxZ2byLtLVF91GdRvBid7U+_aA@mail.gmail.com>
In-Reply-To: <CABCJKueNhc8qCbZbHJqdCB+PHHy0u5ETP4uWfpWBRaOMX6U6hA@mail.gmail.com>

On Tue, Jul 25, 2023 at 12:34 AM Sami Tolvanen <samitolvanen@google.com> wrote:
>
> On Thu, Jul 20, 2023 at 8:06 AM Guo Ren <guoren@kernel.org> wrote:
> >
> > On Thu, Jul 20, 2023 at 8:19 AM Sami Tolvanen <samitolvanen@google.com> wrote:
> > >
> > > Are you planning on resending this patch? I see it didn't gain much
> > > traction last time, but this looks like a much cleaner solution for
> > > selecting the overflow stack than having a `shadow_stack` and calling
> > > to C to compute the per-CPU offset. The asm_per_cpu macro would also
> > > come in handy when implementing CONFIG_SHADOW_CALL_STACK, which we'd
> > > like to have on RISC-V too.
> >
> > I remember we ended up with an atomic lock mechanism instead of a
> > per-CPU offset, so what's the benefit of the per-CPU style in the
> > overflow_stack path?
>
> The benefit is not needing a separate temporary stack and locks just
> to compute the per-CPU offset. With CONFIG_SHADOW_CALL_STACK, we would
> also need a "shadow" shadow call stack in this case before calling to
> C code, at which point computing the offsets directly in assembly is
> significantly cleaner and free of concurrency issues.
>
> Sami

Oh, you convinced me; it could save another 1KB of memory.

Acked-by: Guo Ren <guoren@kernel.org>

--
Best Regards
 Guo Ren
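For context on the approach being acked here: the asm_per_cpu macro resolves a
per-CPU symbol's address entirely in assembly, so the stack-overflow entry path
needs no lock and no shared temporary stack. Below is a minimal sketch of the
idea, not the patch itself. It assumes the usual RISC-V kernel conventions: tp
points at the current task_struct, TASK_TI_CPU_NUM is an asm-offsets constant
for thread_info's cpu field, PER_CPU_OFFSET_SHIFT is log2(sizeof(unsigned
long)), and __per_cpu_offset[] holds each CPU's per-CPU base offset. Exact
names may differ from the version that was eventually merged:

	#ifdef CONFIG_SMP
	/* \dst := address of this CPU's copy of per-CPU symbol \sym; clobbers \tmp */
	.macro asm_per_cpu dst sym tmp
		lw	\tmp, TASK_TI_CPU_NUM(tp)	/* current CPU id (32-bit field) */
		slli	\tmp, \tmp, PER_CPU_OFFSET_SHIFT /* scale id to an array index */
		la	\dst, __per_cpu_offset
		add	\dst, \dst, \tmp
		REG_L	\tmp, 0(\dst)			/* this CPU's per-CPU base offset */
		la	\dst, \sym
		add	\dst, \dst, \tmp		/* link address + offset = per-CPU copy */
	.endm
	#else /* !CONFIG_SMP */
	.macro asm_per_cpu dst sym tmp
		la	\dst, \sym			/* UP: the only copy is the symbol itself */
	.endm
	#endif

With such a macro, the overflow handler can switch to its per-CPU overflow
stack using only registers that are already free at that point, roughly:

	asm_per_cpu	sp, overflow_stack, x31
	li	x31, OVERFLOW_STACK_SIZE
	add	sp, sp, x31	/* stacks grow down, so start from the top */

which is exactly the "no separate temporary stack and locks" property Sami
describes above.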
Thread overview: 14+ messages

2022-11-24  9:48 [PATCH v2] riscv: VMAP_STACK overflow detection thread-safe Deepak Gupta
2022-11-24  9:53 ` Guo Ren
2022-11-25 20:13   ` Deepak Gupta
2022-11-24 15:26 ` Jisheng Zhang
2022-11-25 11:29   ` Jisheng Zhang
2022-11-25 21:41     ` Deepak Gupta
2022-11-25 21:35   ` Deepak Gupta
2023-07-20  0:18 ` Sami Tolvanen
2023-07-20 15:06   ` Guo Ren
2023-07-24 16:34     ` Sami Tolvanen
2023-07-28 12:01       ` Guo Ren [this message]
[not found] ` <CAKC1njSaR2d-T_UnVJDZYbROT5OLEjBJ+Aps-UHPFTefDc8=6g@mail.gmail.com>
2023-07-20 18:42   ` Deepak Gupta
2023-07-20 15:10 ` Guo Ren
2023-07-24 17:03   ` Sami Tolvanen