From: Andy Lutomirski <luto@kernel.org>
To: Dave Hansen <dave.hansen@intel.com>
Cc: Waiman Long <longman@redhat.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Andrew Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
"H. Peter Anvin" <hpa@zytor.com>, X86 ML <x86@kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] x86/mm/fault: Allow stack access below %rsp
Date: Sun, 4 Nov 2018 21:11:03 -0800
Message-ID: <CALCETrV0RYZPbZ0vyRehvS=L-yZwLQ06=eov+=GkzFm0GoQQpQ@mail.gmail.com>
In-Reply-To: <47b1c477-46a7-6b04-7537-378e2910611b@intel.com>

On Fri, Nov 2, 2018 at 3:28 PM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 11/2/18 12:50 PM, Waiman Long wrote:
> > On 11/02/2018 03:44 PM, Dave Hansen wrote:
> >> On 11/2/18 12:40 PM, Waiman Long wrote:
> >>> The 64k+ limit check is kind of arbitrary, so it is now removed and
> >>> expand_stack() alone decides whether a segmentation fault should happen.
> >> With the 64k check removed, what's the next limit that we bump into? Is
> >> it just the stack_guard_gap space above the next-lowest VMA?
> > I think it is both the stack_guard_gap space above the next lowest VMA
> > and the rlimit(RLIMIT_STACK).
>
> The gap seems to be hundreds of megabytes, while RLIMIT_STACK is
> typically 8MB by default, so RLIMIT_STACK is likely the practical
> limit that will be hit.  So, practically, we've taken a ~64k area
> that we would extend the stack into on demand, and turned it into
> the full ~8MB area that you could have expanded into anyway, but
> all at once.
>
> That doesn't seem too insane, especially since we don't physically back
> the 8MB or anything. Logically, it also seems like you *should* be able
> to touch any bit of the stack within the rlimit.
>
> But, on the other hand, as our comments say: "Accessing the stack below
> %sp is always a bug." Have we been unsuccessful in convincing our gcc
> buddies of this?
FWIW, the old code is a bit bogus.  Why were we restricting the range
of stack-expanding addresses for user-mode accesses without applying
the same restriction to kernel uaccess addresses that can trigger the
same expansion?

So I think I agree with the patch.
Thread overview: 12+ messages
2018-11-02 19:40 [RFC PATCH] x86/mm/fault: Allow stack access below %rsp Waiman Long
2018-11-02 19:44 ` Dave Hansen
2018-11-02 19:50 ` Waiman Long
2018-11-02 20:11 ` Andy Lutomirski
2018-11-02 20:34 ` Waiman Long
2018-11-02 22:28 ` Dave Hansen
2018-11-05 5:11 ` Andy Lutomirski [this message]
2018-11-05 5:14 ` Andy Lutomirski
2018-11-05 17:20 ` Dave Hansen
2018-11-05 19:21 ` Andy Lutomirski
2018-11-05 16:27 ` Waiman Long
2018-11-05 17:51 ` Dave Hansen