From: Will Deacon <will@kernel.org>
To: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Marc Zyngier <maz@kernel.org>,
	Oliver Upton <oliver.upton@linux.dev>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Arnd Bergmann <arnd@arndb.de>, Oleg Nesterov <oleg@redhat.com>,
	Eric Biederman <ebiederm@xmission.com>,
	Kees Cook <keescook@chromium.org>, Shuah Khan <shuah@kernel.org>,
	"Rick P. Edgecombe" <rick.p.edgecombe@intel.com>,
	Deepak Gupta <debug@rivosinc.com>,
	Ard Biesheuvel <ardb@kernel.org>,
	Szabolcs Nagy <Szabolcs.Nagy@arm.com>,
	"H.J. Lu" <hjl.tools@gmail.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-riscv@lists.infradead.org
Subject: Re: [PATCH v3 00/36] arm64/gcs: Provide support for GCS in userspace
Date: Thu, 10 Aug 2023 10:40:16 +0100	[thread overview]
Message-ID: <20230810094016.GA5365@willie-the-truck> (raw)
In-Reply-To: <f279ec25-e1c7-48e6-bd9d-5c753e829aad@sirena.org.uk>

On Tue, Aug 08, 2023 at 09:25:11PM +0100, Mark Brown wrote:
> On Tue, Aug 08, 2023 at 02:38:58PM +0100, Will Deacon wrote:
> 
> > But seriously, I think the question is more about what this brings us
> > *on top of* SCS, since for the foreseeable future folks that care about
> > this stuff (like Android) will be using SCS. GCS on its own doesn't make
> > sense to me, given the recompilation effort to remove SCS and the lack
> > of hardware, so then you have to look at what it brings in addition to
> > SCS and balance that against the performance cost.
> 
> > Given that, is anybody planning to ship a distribution with this enabled?
> 
> I'm not sure that your assumption that the only people who would
> consider deploying this are those who have deployed SCS is a valid one.
> SCS users are definitely part of the mix, but GCS is expected to be much
> more broadly applicable.  As you say, SCS is very invasive: it requires a
> rebuild of everything with different generated code and, as Szabolcs
> outlined, has ABI challenges for general distros.  Any code built (or
> JITed) with anything other than clang is going to require some explicit
> support to do SCS (e.g. the kernel's SCS support does nothing for
> assembly code), and there's a bunch of runtime support involved.  It's
> very much a specialist feature, mainly practical in well controlled,
> somewhat vertical systems - I've not seen any suggestion that general
> purpose distros are considering using it.

I've also seen no suggestion that general purpose distros are considering
GCS -- that's what I'm asking about here, and also saying that we shouldn't
rush in an ABI without confidence that it actually works beyond unit tests
(although it's great that you wrote selftests!).

> In contrast in the case of GCS one of the nice features is that for most
> code it's very much non-invasive, much less so than things like PAC/BTI
> and SCS, which means that the audience is much wider than it is for SCS
> - it's a *much* easier sell for general purpose distros to enable GCS
> than to enable SCS.

This sounds compelling, but has anybody tried running significant parts of a
distribution (e.g. running Debian source package tests, booting Android,
using a browser, running QEMU) with GCS enabled? I can well imagine
non-trivial applications violating both assumptions of the architecture and
the ABI.

> For the majority of programs all the support that is needed is in the
> kernel and libgcc/libc; there's no impact on the code generation.  There
> are no extra instructions in the normal flow which would impact systems
> without the feature, and there are no extra registers in use.  So even if
> the binaries are run on a system without GCS, or for some reason someone
> decides it's best to turn the feature off on a system that is capable of
> using it, the fact that it's just using the existing bl/ret pairs means
> that there is minimal overhead.  This all means that it's much more
> practical to deploy in general purpose distros.  On the other hand, when
> active it affects all code; this improves coverage, but the improved
> coverage can be a worry.
> 
> I can see that systems that have gone through all the effort of enabling
> SCS might not rush to implement GCS, though there should be no harm in
> having the two features running side by side beyond the doubled memory
> requirements so you can at least have a transition plan (GCS does have
> some allowances which enable hardware to mitigate some of the memory
> bandwidth requirements at least).  You do still get the benefit of the
> additional hardware protections GCS offers, and the coverage of all
> branch and ret instructions will be of interest both for security and
> for unwinders.  It definitely offers less of an incremental
> improvement on top of SCS than it does without SCS, though.
> 
> GCS and SCS are comparable features in terms of the protection they aim
> to add but their system integration impacts are different.

Again, this sounds plausible but I don't see any data to back it up so I
don't really have a feeling as to how true it is.

> > If not, why are we bothering? If so, how much of that distribution has
> > been brought up and how does the "dynamic linker or other startup code"
> > decide what to do?
> 
> There is active interest in the x86 shadow stack support from distros,
> GCS is a lot earlier on in the process but isn't fundamentally different
> so it is expected that this will translate.  There is also a
> chicken-and-egg thing where upstream support gates a lot of people's
> interest: what people will consider carrying out of tree is different
> to what they'll enable.

I'm not saying we should wait until distros are committed, but Arm should
be able to do that work on a fork, exactly like we did for the arm64
bringup. We have the fastmodel, so running interesting stuff with GCS
enabled should be dead easy, no?

> Architecture-specific feedback on the implementation can also be fed
> back into the still-ongoing review of the ABI that is being established
> for x86; there will doubtless be pushback from userspace people about
> variations between architectures.
> 
> The userspace decision about enablement will primarily be driven by an
> ELF marking which the dynamic linker looks at to determine if the
> binaries it is loading can support GCS.  A later dlopen() can either
> refuse to load an additional library if the process currently has GCS
> enabled, ignore the issue and hope things work out (there's a good
> chance they will, but obviously that's not safe), or (more involved)
> go round all the threads and disable GCS before proceeding.  The main
> reason any sort of rebuild is required for most code is to add the ELF
> marking; there will be a compiler option to select it.  Static binaries
> should know if everything linked into them is GCS compatible and can
> enable GCS if appropriate in their startup code.
> 
> The majority of the full distro work at this point is on the x86 side
> given the hardware availability, we are looking at that within Arm of
> course.  I'm not aware of any huge blockers we have encountered thus
> far.

Ok, so it sounds like you've started something then? How far have you got?

> It is fair to say that there's less active interest on the arm64 side
> since, as you say, the feature is quite a way off making its way into
> hardware, though there are also long lead times on getting the full
> software stack to end users, and kernel support becomes a blocker for
> the userspace stack.
> 
> > After the mess we had with BTI and mprotect(), I'm hesitant to merge
> > features like this without knowing that the ABI can stand real code.
> 
> The equivalent x86 feature is in current hardware[1] and there has been
> some distro work (I believe one of the issues x86 has had is coping with
> a distro which shipped an early out of tree ABI; that experience has
> informed the current ABI which, as the cover letter says, we are
> following closely).  AIUI the biggest blocker on userspace work for x86
> right now is landing the kernel side of things, so that everyone else
> has a stable ABI to work from and doesn't need to carry out of tree
> patches; I've heard frustration expressed at the deployment being held
> up.  IIRC Fedora were on the leading edge in terms of active interest;
> they tend to be, given that they're one of the most quickly iterating
> distros.
> 
> This definitely does rely fairly heavily on the x86 experience for
> confidence in the ABI and, to be honest, one of the big unknowns at
> this point is whether you or Catalin will have opinions on how things
> are being done.

While we'd be daft not to look at what the x86 folks are doing, I don't
think we should rely solely on them to inform the design for arm64 when
it should be relatively straightforward to prototype the distro work on
the model. There's also no rush to land the kernel changes given that
GCS hardware doesn't exist.

Will

Thread overview: 192+ messages (cross-posted copies deduplicated; listing truncated)
2023-07-31 13:43 [PATCH v3 00/36] arm64/gcs: Provide support for GCS in userspace Mark Brown
2023-07-31 13:43 ` [PATCH v3 01/36] prctl: arch-agnostic prctl for shadow stack Mark Brown
2023-07-31 13:43 ` [PATCH v3 02/36] arm64: Document boot requirements for Guarded Control Stacks Mark Brown
2023-07-31 13:43 ` [PATCH v3 03/36] arm64/gcs: Document the ABI " Mark Brown
2023-07-31 13:43 ` [PATCH v3 04/36] arm64/sysreg: Add new system registers for GCS Mark Brown
2023-07-31 13:43 ` [PATCH v3 05/36] arm64/sysreg: Add definitions for architected GCS caps Mark Brown
2023-07-31 13:43 ` [PATCH v3 06/36] arm64/gcs: Add manual encodings of GCS instructions Mark Brown
2023-07-31 13:43 ` [PATCH v3 07/36] arm64/gcs: Provide copy_to_user_gcs() Mark Brown
2023-07-31 13:43 ` [PATCH v3 08/36] arm64/cpufeature: Runtime detection of Guarded Control Stack (GCS) Mark Brown
2023-07-31 13:43 ` [PATCH v3 09/36] arm64/mm: Allocate PIE slots for EL0 guarded control stack Mark Brown
2023-07-31 13:43 ` [PATCH v3 10/36] mm: Define VM_SHADOW_STACK for arm64 when we support GCS Mark Brown
2023-08-01 16:53   ` Mike Rapoport
2023-07-31 13:43 ` [PATCH v3 11/36] arm64/mm: Map pages for guarded control stack Mark Brown
2023-08-01 17:02   ` Mike Rapoport
2023-08-01 19:05     ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 12/36] KVM: arm64: Manage GCS registers for guests Mark Brown
2023-07-31 13:43 ` [PATCH v3 13/36] arm64/gcs: Allow GCS usage at EL0 and EL1 Mark Brown
2023-07-31 13:43 ` [PATCH v3 14/36] arm64/idreg: Add overrride for GCS Mark Brown
2023-07-31 13:43 ` [PATCH v3 15/36] arm64/hwcap: Add hwcap " Mark Brown
2023-07-31 13:43 ` [PATCH v3 16/36] arm64/traps: Handle GCS exceptions Mark Brown
2023-07-31 13:43 ` [PATCH v3 17/36] arm64/mm: Handle GCS data aborts Mark Brown
2023-07-31 13:43 ` [PATCH v3 18/36] arm64/gcs: Context switch GCS state for EL0 Mark Brown
2023-07-31 13:43 ` [PATCH v3 19/36] arm64/gcs: Allocate a new GCS for threads with GCS enabled Mark Brown
2023-07-31 13:43 ` [PATCH v3 20/36] arm64/gcs: Implement shadow stack prctl() interface Mark Brown
2023-07-31 13:43 ` [PATCH v3 21/36] arm64/mm: Implement map_shadow_stack() Mark Brown
2023-07-31 15:56   ` Edgecombe, Rick P
2023-07-31 17:06     ` Mark Brown
2023-07-31 23:19       ` Edgecombe, Rick P
2023-08-01 14:01         ` Mark Brown
2023-08-01 17:07           ` Edgecombe, Rick P
2023-08-01 17:28             ` Mike Rapoport
2023-08-01 18:03               ` Mark Brown
2023-08-01 17:57             ` Mark Brown
2023-08-01 20:57               ` Edgecombe, Rick P
2023-08-02 16:27                 ` Mark Brown
2023-08-04 13:38                   ` Mark Brown
2023-08-04 16:43                     ` Edgecombe, Rick P
2023-08-04 17:10                       ` Mark Brown
2023-08-04 17:10                         ` Mark Brown
2023-08-04 17:10                         ` Mark Brown
2023-08-07 10:20   ` Szabolcs Nagy
2023-08-07 10:20     ` Szabolcs Nagy
2023-08-07 10:20     ` Szabolcs Nagy
2023-08-07 13:00     ` Mark Brown
2023-08-07 13:00       ` Mark Brown
2023-08-07 13:00       ` Mark Brown
2023-08-08  8:21       ` Szabolcs Nagy
2023-08-08  8:21         ` Szabolcs Nagy
2023-08-08  8:21         ` Szabolcs Nagy
2023-08-08 20:42         ` Mark Brown
2023-08-08 20:42           ` Mark Brown
2023-08-08 20:42           ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 22/36] arm64/signal: Set up and restore the GCS context for signal handlers Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 23/36] arm64/signal: Expose GCS state in signal frames Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 24/36] arm64/ptrace: Expose GCS via ptrace and core files Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 25/36] arm64: Add Kconfig for Guarded Control Stack (GCS) Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 26/36] kselftest/arm64: Verify the GCS hwcap Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 27/36] kselftest/arm64: Add GCS as a detected feature in the signal tests Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 28/36] kselftest/arm64: Add framework support for GCS to signal handling tests Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 29/36] kselftest/arm64: Allow signals tests to specify an expected si_code Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 30/36] kselftest/arm64: Always run signals tests with GCS enabled Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 31/36] kselftest/arm64: Add very basic GCS test program Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 32/36] kselftest/arm64: Add a GCS test program built with the system libc Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 33/36] kselftest/arm64: Add test coverage for GCS mode locking Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 34/36] selftests/arm64: Add GCS signal tests Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 35/36] kselftest/arm64: Add a GCS stress test Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43 ` [PATCH v3 36/36] kselftest/arm64: Enable GCS for the FP stress tests Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-07-31 13:43   ` Mark Brown
2023-08-01 14:13 ` [PATCH v3 00/36] arm64/gcs: Provide support for GCS in userspace Will Deacon
2023-08-01 14:13   ` Will Deacon
2023-08-01 14:13   ` Will Deacon
2023-08-01 15:09   ` Mark Brown
2023-08-01 15:09     ` Mark Brown
2023-08-01 15:09     ` Mark Brown
2023-08-08 10:27     ` Szabolcs Nagy
2023-08-08 10:27       ` Szabolcs Nagy
2023-08-08 10:27       ` Szabolcs Nagy
2023-08-08 13:38     ` Will Deacon
2023-08-08 13:38       ` Will Deacon
2023-08-08 13:38       ` Will Deacon
2023-08-08 20:25       ` Mark Brown
2023-08-08 20:25         ` Mark Brown
2023-08-08 20:25         ` Mark Brown
2023-08-10  9:40         ` Will Deacon [this message]
2023-08-10  9:40           ` Will Deacon
2023-08-10  9:40           ` Will Deacon
2023-08-10 16:05           ` Mark Brown
2023-08-10 16:05             ` Mark Brown
2023-08-10 16:05             ` Mark Brown

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20230810094016.GA5365@willie-the-truck \
    --to=will@kernel.org \
    --cc=Szabolcs.Nagy@arm.com \
    --cc=akpm@linux-foundation.org \
    --cc=aou@eecs.berkeley.edu \
    --cc=ardb@kernel.org \
    --cc=arnd@arndb.de \
    --cc=broonie@kernel.org \
    --cc=catalin.marinas@arm.com \
    --cc=corbet@lwn.net \
    --cc=debug@rivosinc.com \
    --cc=ebiederm@xmission.com \
    --cc=hjl.tools@gmail.com \
    --cc=james.morse@arm.com \
    --cc=keescook@chromium.org \
    --cc=kvmarm@lists.linux.dev \
    --cc=linux-arch@vger.kernel.org \
    --cc=linux-arm-kernel@lists.infradead.org \
    --cc=linux-doc@vger.kernel.org \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-kselftest@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-riscv@lists.infradead.org \
    --cc=maz@kernel.org \
    --cc=oleg@redhat.com \
    --cc=oliver.upton@linux.dev \
    --cc=palmer@dabbelt.com \
    --cc=paul.walmsley@sifive.com \
    --cc=rick.p.edgecombe@intel.com \
    --cc=shuah@kernel.org \
    --cc=suzuki.poulose@arm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
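
  A reply built by hand (rather than by git-send-email) needs the same
  threading headers. A minimal sketch, where the In-Reply-To value is
  this message's Message-ID from the command above, and the subject and
  body text are illustrative placeholders:

```shell
# Sketch: the headers a threaded reply to this message would carry.
# The In-Reply-To value comes from the archive; the body is a placeholder.
cat > reply.eml <<'EOF'
Subject: Re: [PATCH v3 00/36] arm64/gcs: Provide support for GCS in userspace
In-Reply-To: <20230810094016.GA5365@willie-the-truck>

> quoted context being replied to
Interleaved response goes here, under the text it addresses.
EOF

# Quick sanity check that the threading header is present.
grep -c '^In-Reply-To:' reply.eml   # prints "1"
```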

Be sure your reply has a Subject: header at the top and a blank line
before the message body.