* Proposal for consistent Kconfig usage by the hypervisor build system
@ 2023-01-12 16:52 Jan Beulich
  2023-01-30 12:27 ` Julien Grall
  2023-02-02 15:51 ` Andrew Cooper
  0 siblings, 2 replies; 9+ messages in thread
From: Jan Beulich @ 2023-01-12 16:52 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini, Wei Liu

(re-sending with REST on Cc, as requested at the community call)

At present we use a mix of Makefile and Kconfig driven capability checks for
tool chain components involved in the building of the hypervisor.  Which approach
is used where is in part a result of the relatively late introduction of
Kconfig into the build system, but in other places also simply a result of the
differing tastes of different contributors.  Switching to a uniform model,
however, has drawbacks as well:
 - A uniformly Makefile based model is not in line with Linux, where Kconfig
   actually comes from (at least as far as we're concerned; there may be
   earlier origins).  This model is also disliked by some community
   members.
 - A uniformly Kconfig based model suffers from a weakness of Kconfig in that
   dependent options are silently turned off when their dependencies aren't met
   (see the sketch right after this list).  This has the undesirable effect that
   a carefully crafted .config may be silently converted to one with features
   turned off which were intended to be on.  While this could be deemed expected
   behavior when a dependency is also an option which was selected by the person
   configuring the hypervisor, it certainly can be surprising when the dependency
   is an auto-detected tool chain capability.  Furthermore there's no automatic
   re-running of kconfig if any part of the tool chain changes.  (Despite knowing
   of this in principle, I've still been hit by this more than once in the past:
   If one rebuilds a tree which wasn't touched for a while, and some time has
   already passed since updating to the newer component, one may not immediately
   make the connection.)

Therefore I'd like to propose that we use an intermediate model: Detected tool
chain capabilities (and the like) may only be used to control optimization (i.e.
including their use as dependencies for optimization controls) and to establish
the defaults of options.  They may not be used to control functionality, i.e.
they may in particular not be specified as a dependency of an option controlling
functionality.  This way, unless defaults were overridden, things will build, and
non-default settings will be honored (albeit potentially resulting in a build
failure).

For example

config AS_VMX
	def_bool $(as-instr,vmcall)

would be okay (as long as we have fallback code to deal with the case of too
old an assembler; raising the baseline there is a separate topic), but, instead
of what we have currently,

config XEN_SHSTK
	bool "Supervisor Shadow Stacks"
	default HAS_AS_CET_SS

would be the way to go.
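
For comparison, the form we have currently (quoted from memory here, so take the
exact lines as illustrative) makes the capability a hard dependency:

config XEN_SHSTK
	bool "Supervisor Shadow Stacks"
	depends on HAS_AS_CET_SS
	default y

With the dependency form, an assembler lacking CET-SS support silently removes
the option; with the default form above, an old assembler merely changes the
default, and an explicit =y then shows up as a build failure rather than a
silent downgrade.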

It was additionally suggested that, for a better user experience, unmet
dependencies which are known to result in build failures (which at times may be
hard to associate back with the original cause) would be re-checked by Makefile
based logic, leading to an early build failure with a comprehensible error
message.  Personally I'd prefer this to be just warnings (first and foremost to
avoid failing the build just because of a broken or stale check), but I can see
that they might be overlooked when there's a lot of other output.  In any event
we may want to try to figure out an approach which would make sufficiently sure
that Makefile and Kconfig checks don't go out of sync.
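
A minimal sketch of what such a Makefile based re-check might look like (the
variable names and the probe are illustrative only, and the CONFIG_* values are
assumed to be visible to make in the usual Kbuild-like fashion):

# Re-probe the assembler for CET-SS support.
cet-ss-ok := $(shell echo 'wrssq %rax,(%rsp)' | $(CC) -x assembler -c -o /dev/null - 2>/dev/null && echo y)

ifeq ($(CONFIG_XEN_SHSTK)-$(cet-ss-ok),y-)
$(warning CONFIG_XEN_SHSTK is enabled, but the assembler lacks CET-SS support; the build will likely fail)
endif

Whether this ends in $(warning ...) or $(error ...) is exactly the
warnings-vs-errors question above.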

Jan



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-01-12 16:52 Proposal for consistent Kconfig usage by the hypervisor build system Jan Beulich
@ 2023-01-30 12:27 ` Julien Grall
  2023-01-30 13:54   ` Jan Beulich
  2023-02-02 15:51 ` Andrew Cooper
  1 sibling, 1 reply; 9+ messages in thread
From: Julien Grall @ 2023-01-30 12:27 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, George Dunlap, Stefano Stabellini, Wei Liu

Hi Jan,

Apologies for the late reply.

On 12/01/2023 16:52, Jan Beulich wrote:
> (re-sending with REST on Cc, as requested at the community call)
> 
> At present we use a mix of Makefile and Kconfig driven capability checks for
> tool chain components involved in the building of the hypervisor.  What approach
> is used where is in some part a result of the relatively late introduction of
> Kconfig into the build system, but in other places also simply a result of
> different taste of different contributors.  Switching to a uniform model,
> however, has drawbacks as well:
>   - A uniformly Makefile based model is not in line with Linux, where Kconfig is
>     actually coming from (at least as far as we're concerned; there may be
>     earlier origins).  This model is also being disliked by some community
>     members.
>   - A uniformly Kconfig based model suffers from a weakness of Kconfig in that
>     dependent options are silently turned off when dependencies aren't met.  This
>     has the undesirable effect that a carefully crafted .config may be silently
>     converted to one with features turned off which were intended to be on.
>     While this could be deemed expected behavior when a dependency is also an
>     option which was selected by the person configuring the hypervisor, it
>     certainly can be surprising when the dependency is an auto-detected tool
>     chain capability.  Furthermore there's no automatic re-running of kconfig if
>     any part of the tool chain changed.  (Despite knowing of this in principle,
>     I've still been hit by this more than once in the past: If one rebuilds a
>     tree which wasn't touched for a while, and if some time has already passed
>     since the updating to the newer component, one may not immediately make the
>     connection.)
> 
> Therefore I'd like to propose that we use an intermediate model: Detected tool
> chain capabilities (and alike) may only be used to control optimization (i.e.
> including their use as dependencies for optimization controls) and to establish
> the defaults of options.  They may not be used to control functionality, i.e.
> they may in particular not be specified as a dependency of an option controlling
> functionality.  This way unless defaults were overridden things will build, and
> non-default settings will be honored (albeit potentially resulting in a build
> failure).
> 
> For example
> 
> config AS_VMX
> 	def_bool $(as-instr,vmcall)
> 
> would be okay (as long as we have fallback code to deal with the case of too
> old an assembler; raising the baseline there is a separate topic), but instead
> of what we have currently
> 
> config XEN_SHSTK
> 	bool "Supervisor Shadow Stacks"
> 	default HAS_AS_CET_SS
> 
> would be the way to go.

I think your intermediate model makes sense.

> 
> It was additionally suggested that, for a better user experience, unmet
> dependencies which are known to result in build failures (which at times may be
> hard to associate back with the original cause) would be re-checked by Makefile
> based logic, leading to an early build failure with a comprehensible error
> message.  Personally I'd prefer this to be just warnings (first and foremost to
> avoid failing the build just because of a broken or stale check), but I can see
> that they might be overlooked when there's a lot of other output. 

If we wanted the Makefile to check the available features, then I would
prefer an early error rather than a warning. That said...

> In any event
> we may want to try to figure an approach which would make sufficiently sure that
> Makefile and Kconfig checks don't go out of sync.

... this is indeed a concern. How incomprehensible would the error be if 
we don't check it in the Makefile?

Cheers,

-- 
Julien Grall



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-01-30 12:27 ` Julien Grall
@ 2023-01-30 13:54   ` Jan Beulich
  0 siblings, 0 replies; 9+ messages in thread
From: Jan Beulich @ 2023-01-30 13:54 UTC (permalink / raw)
  To: Julien Grall
  Cc: Andrew Cooper, George Dunlap, Stefano Stabellini, Wei Liu, xen-devel

On 30.01.2023 13:27, Julien Grall wrote:
> On 12/01/2023 16:52, Jan Beulich wrote:
>> It was additionally suggested that, for a better user experience, unmet
>> dependencies which are known to result in build failures (which at times may be
>> hard to associate back with the original cause) would be re-checked by Makefile
>> based logic, leading to an early build failure with a comprehensible error
>> message.  Personally I'd prefer this to be just warnings (first and foremost to
>> avoid failing the build just because of a broken or stale check), but I can see
>> that they might be overlooked when there's a lot of other output. 
> 
> If we wanted the Makefile to check the available features, then I would 
> prefer an early error rather than warning. That said...
> 
>> In any event
>> we may want to try to figure an approach which would make sufficiently sure that
>> Makefile and Kconfig checks don't go out of sync.
> 
> ... this is indeed a concern. How incomprehensible would the error be if 
> we don't check it in the Makefile?

That'll depend on the particular feature / functionality, and might range from
very obvious and easy to understand to very well obfuscated.

Jan



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-01-12 16:52 Proposal for consistent Kconfig usage by the hypervisor build system Jan Beulich
  2023-01-30 12:27 ` Julien Grall
@ 2023-02-02 15:51 ` Andrew Cooper
  2023-02-09 13:43   ` Jan Beulich
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Cooper @ 2023-02-02 15:51 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini, Wei Liu

On 12/01/2023 4:52 pm, Jan Beulich wrote:
> (re-sending with REST on Cc, as requested at the community call)
>
> At present we use a mix of Makefile and Kconfig driven capability checks for
> tool chain components involved in the building of the hypervisor.  What approach
> is used where is in some part a result of the relatively late introduction of
> Kconfig into the build system, but in other places also simply a result of
> different taste of different contributors.  Switching to a uniform model,
> however, has drawbacks as well:
>  - A uniformly Makefile based model is not in line with Linux, where Kconfig is
>    actually coming from (at least as far as we're concerned; there may be
>    earlier origins).  This model is also being disliked by some community
>    members.
>  - A uniformly Kconfig based model suffers from a weakness of Kconfig in that
>    dependent options are silently turned off when dependencies aren't met.

This is deliberate behaviour of Kconfig, and not related to toolchain
dependencies.

Exactly the same thing happens for a change that edits a regular
dependency, or inserts/removes an option.

>   This
>    has the undesirable effect that a carefully crafted .config may be silently
>    converted to one with features turned off which were intended to be on.

The Makefile model does exactly the same.  It *will* check feature
availability of the toolchain, and *will* modify code generation as a
result.

The programmer just doesn't get to see this because there's no written
record of it happening when it's not encoded in Kconfig.
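
A hedged sketch of the sort of Makefile probe being talked about here (the
instruction and flag names are illustrative, not Xen's actual ones); the result
changes code generation, yet nothing in .config records whether the probe
succeeded:

as-rdrand := $(shell echo 'rdrand %eax' | $(CC) -x assembler -c -o /dev/null - 2>/dev/null && echo y)
CFLAGS-$(as-rdrand) += -DHAVE_AS_RDRAND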

>    While this could be deemed expected behavior when a dependency is also an
>    option which was selected by the person configuring the hypervisor, it
>    certainly can be surprising when the dependency is an auto-detected tool
>    chain capability.  Furthermore there's no automatic re-running of kconfig if
>    any part of the tool chain changed.  (Despite knowing of this in principle,
>    I've still been hit by this more than once in the past: If one rebuilds a
>    tree which wasn't touched for a while, and if some time has already passed
>    since the updating to the newer component, one may not immediately make the
>    connection.)
>
> Therefore I'd like to propose that we use an intermediate model: Detected tool
> chain capabilities (and alike) may only be used to control optimization (i.e.
> including their use as dependencies for optimization controls) and to establish
> the defaults of options.  They may not be used to control functionality, i.e.
> they may in particular not be specified as a dependency of an option controlling
> functionality.  This way unless defaults were overridden things will build, and
> non-default settings will be honored (albeit potentially resulting in a build
> failure).
>
> For example
>
> config AS_VMX
> 	def_bool $(as-instr,vmcall)
>
> would be okay (as long as we have fallback code to deal with the case of too
> old an assembler; raising the baseline there is a separate topic), but instead
> of what we have currently
>
> config XEN_SHSTK
> 	bool "Supervisor Shadow Stacks"
> 	default HAS_AS_CET_SS

Yes.  This is very intentional, and is AFAICT an example of something
which cannot be encoded in the existing Makefile scheme.

There is a tonne of stuff we can only do with proper toolchain support.
CET (both shstk and ibt) are examples, with plenty more to come, where
playing around with .byte in older toolchains simply will not work.

There are also plenty of cases where it would be technically possible,
but the cost of doing so is so large that it's not going to happen.

> would be the way to go.
>
> It was additionally suggested that, for a better user experience, unmet
> dependencies which are known to result in build failures (which at times may be
> hard to associate back with the original cause) would be re-checked by Makefile
> based logic, leading to an early build failure with a comprehensible error
> message.  Personally I'd prefer this to be just warnings (first and foremost to
> avoid failing the build just because of a broken or stale check), but I can see
> that they might be overlooked when there's a lot of other output.  In any event
> we may want to try to figure an approach which would make sufficiently sure that
> Makefile and Kconfig checks don't go out of sync.

This is a brand new feature request.  But it looks like you're trying to
reinvent ./configure without using ./configure.

~Andrew



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-02-02 15:51 ` Andrew Cooper
@ 2023-02-09 13:43   ` Jan Beulich
  2023-02-09 16:02     ` George Dunlap
  0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2023-02-09 13:43 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: George Dunlap, Julien Grall, Stefano Stabellini, Wei Liu, xen-devel

On 02.02.2023 16:51, Andrew Cooper wrote:
> On 12/01/2023 4:52 pm, Jan Beulich wrote:
>> (re-sending with REST on Cc, as requested at the community call)
>>
>> At present we use a mix of Makefile and Kconfig driven capability checks for
>> tool chain components involved in the building of the hypervisor.  What approach
>> is used where is in some part a result of the relatively late introduction of
>> Kconfig into the build system, but in other places also simply a result of
>> different taste of different contributors.  Switching to a uniform model,
>> however, has drawbacks as well:
>>  - A uniformly Makefile based model is not in line with Linux, where Kconfig is
>>    actually coming from (at least as far as we're concerned; there may be
>>    earlier origins).  This model is also being disliked by some community
>>    members.
>>  - A uniformly Kconfig based model suffers from a weakness of Kconfig in that
>>    dependent options are silently turned off when dependencies aren't met.
> 
> This is deliberate behaviour of Kconfig, and not related to toolchain
> dependences.
> 
> Exactly the same thing happens for a change that edits a regular
> dependency, or inserts/removes an option.

That's sufficiently different: Options depending on one another of course
can have that effect. But we're talking about depending on something
outside of the control of the person doing the configury.

In fact there's a use case where tool chain dependencies get in the way:
Our kernel repo holds (often multiple per arch) pre-built .config files.
Prior to Linux introducing tool chain dependencies, anyone could update
them (without actually trying to build the result right away) using
whatever compiler they had to hand, even cross-arch (as the kconfig
step only requires a host compiler, not also a target [cross] one). The
resulting configs would then be consumed by the build system, using the
proper environment for the targeted code stream, i.e. including that
code stream's tool chain. If the build resulted in a change to .config,
this would be flagged as an error. When the first few tool chain
dependencies appeared upstream, they tried to hack around that. I have
to admit that I don't know what the state there is nowadays.

>>   This
>>    has the undesirable effect that a carefully crafted .config may be silently
>>    converted to one with features turned off which were intended to be on.
> 
> The Makefile model does exactly the same.  It *will* check feature
> availability of the toolchain, and *will* modify code generation as a
> result.
> 
> The programmer just doesn't get to see this because there's no written
> record of it happening when it's not encoded in Kconfig.

I don't think I'm following you here. The Makefile model in particular
won't turn off any CONFIG_* values.

>>    While this could be deemed expected behavior when a dependency is also an
>>    option which was selected by the person configuring the hypervisor, it
>>    certainly can be surprising when the dependency is an auto-detected tool
>>    chain capability.  Furthermore there's no automatic re-running of kconfig if
>>    any part of the tool chain changed.  (Despite knowing of this in principle,
>>    I've still been hit by this more than once in the past: If one rebuilds a
>>    tree which wasn't touched for a while, and if some time has already passed
>>    since the updating to the newer component, one may not immediately make the
>>    connection.)
>>
>> Therefore I'd like to propose that we use an intermediate model: Detected tool
>> chain capabilities (and alike) may only be used to control optimization (i.e.
>> including their use as dependencies for optimization controls) and to establish
>> the defaults of options.  They may not be used to control functionality, i.e.
>> they may in particular not be specified as a dependency of an option controlling
>> functionality.  This way unless defaults were overridden things will build, and
>> non-default settings will be honored (albeit potentially resulting in a build
>> failure).
>>
>> For example
>>
>> config AS_VMX
>> 	def_bool $(as-instr,vmcall)
>>
>> would be okay (as long as we have fallback code to deal with the case of too
>> old an assembler; raising the baseline there is a separate topic), but instead
>> of what we have currently
>>
>> config XEN_SHSTK
>> 	bool "Supervisor Shadow Stacks"
>> 	default HAS_AS_CET_SS
> 
> Yes.  This is very intentional, and is AFAICT an example of something
> which cannot be encoded in the existing Makefile scheme.
> 
> There is a tonne of stuff we can only do with proper toolchain support. 
> CET (both shstk, and ibt) are examples, and plenty more to come, where
> playing around with .byte in older toolchains simply will not work.
> 
> There are also plenty of cases where it would be technically possible,
> but the cost of doing so is so large that it's not going to happen.

Right. Hence there'll still be cases where we simply will fail the build
vs ones where we merely create less optimal code.

>> It was additionally suggested that, for a better user experience, unmet
>> dependencies which are known to result in build failures (which at times may be
>> hard to associate back with the original cause) would be re-checked by Makefile
>> based logic, leading to an early build failure with a comprehensible error
>> message.  Personally I'd prefer this to be just warnings (first and foremost to
>> avoid failing the build just because of a broken or stale check), but I can see
>> that they might be overlooked when there's a lot of other output.  In any event
>> we may want to try to figure an approach which would make sufficiently sure that
>> Makefile and Kconfig checks don't go out of sync.
> 
> This is a brand new feature request.

Not really, no. That's an aspect that should have been considered before
switching any tool chain capability detection to the Kconfig model.

>  But it looks like you're trying to reinvent ./configure without using ./configure.

How so? I'm specifically trying to stay away from pinning down what can
or cannot be built depending on the capabilities of the tool chain used
for the *config step of the build process (which, as said above, may
deliberately happen in an entirely different environment).

Anyway, I have a prototype for the two CET controls mostly ready now, so
I guess we'll continue the discussion there once I've submitted that one.

Jan



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-02-09 13:43   ` Jan Beulich
@ 2023-02-09 16:02     ` George Dunlap
  2023-02-09 16:08       ` Jan Beulich
  0 siblings, 1 reply; 9+ messages in thread
From: George Dunlap @ 2023-02-09 16:02 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel


On Thu, Feb 9, 2023 at 1:43 PM Jan Beulich <jbeulich@suse.com> wrote:

> On 02.02.2023 16:51, Andrew Cooper wrote:
> > On 12/01/2023 4:52 pm, Jan Beulich wrote:
>
> Anyway, I have a prototype for the two CET controls mostly ready now, so
> I guess we'll continue the discussion there once I've submitted that one.
>

One thing that occurred to me will be important: `make randconfig` must
continue to work somehow.  I'm not sure how Anthony's patch dealing with
`checkpolicy` handles that.

 -George



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-02-09 16:02     ` George Dunlap
@ 2023-02-09 16:08       ` Jan Beulich
  2023-02-09 17:59         ` Anthony PERARD
  0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2023-02-09 16:08 UTC (permalink / raw)
  To: George Dunlap
  Cc: Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu, xen-devel, Anthony Perard

On 09.02.2023 17:02, George Dunlap wrote:
> On Thu, Feb 9, 2023 at 1:43 PM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> On 02.02.2023 16:51, Andrew Cooper wrote:
>>> On 12/01/2023 4:52 pm, Jan Beulich wrote:
>>
>> Anyway, I have a prototype for the two CET controls mostly ready now, so
>> I guess we'll continue the discussion there once I've submitted that one.
>>
> 
> One thing that it occured to me will be important: `make randconfig` must
> continue to work somehow.

Hmm, good point. For now I've merely added a TBD to the patch I've already
sent. Right now I can't see how that might be possible without the user
specifying which options cannot be turned on due to tool chain dependencies
(by way of a seeding .config, I suppose).

>  I'm not sure how Anthony's patch to deal with `checkpolicy` deals with that.

Given his remark on the community call I did actually try to locate it,
assuming that it had at least "policy" in the title. But I couldn't find
anything in my mailbox.

Jan



* Re: Proposal for consistent Kconfig usage by the hypervisor build system
  2023-02-09 16:08       ` Jan Beulich
@ 2023-02-09 17:59         ` Anthony PERARD
  0 siblings, 0 replies; 9+ messages in thread
From: Anthony PERARD @ 2023-02-09 17:59 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, Andrew Cooper, George Dunlap, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On Thu, Feb 09, 2023 at 05:08:10PM +0100, Jan Beulich wrote:
> On 09.02.2023 17:02, George Dunlap wrote:
> > On Thu, Feb 9, 2023 at 1:43 PM Jan Beulich <jbeulich@suse.com> wrote:
> > 
> >> On 02.02.2023 16:51, Andrew Cooper wrote:
> >>> On 12/01/2023 4:52 pm, Jan Beulich wrote:
> >>
> >> Anyway, I have a prototype for the two CET controls mostly ready now, so
> >> I guess we'll continue the discussion there once I've submitted that one.
> >>
> > 
> > One thing that it occured to me will be important: `make randconfig` must
> > continue to work somehow.
> 
> Hmm, good point. For now I've merely added a TBD to the patch I've already
> sent. Right now I can't see how that might be possible without the user
> specifying which options cannot be turned on due to tool chain dependencies
> (by way of a seeding .config, I suppose).
> 
> >  I'm not sure how Anthony's patch to deal with `checkpolicy` deals with that.
> 
> Given his remark on the community call I did actually try to locate it,
> assuming that it had at least "policy" in the title. But I couldn't find
> anything in my mailbox.

0b000a2ce813 ("xen: rework `checkpolicy` detection when using "randconfig"")

It boils down to adding "CONFIG_XSM_FLASK_POLICY=n" via the
KCONFIG_ALLCONFIG option. The macro $(filechk_kconfig_allconfig,) does
the job.
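
As a rough illustration of that mechanism (the file name and invocation here
are mine, not taken from the patch): randconfig gets seeded with a file pinning
the options the local environment cannot satisfy, e.g.

# hypothetical seed file passed via KCONFIG_ALLCONFIG
CONFIG_XSM_FLASK_POLICY=n

Something along the lines of `make KCONFIG_ALLCONFIG=<seed file> randconfig`
then keeps the pinned symbols fixed while the remaining options are still
randomised.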

-- 
Anthony PERARD



