QEMU-Devel Archive on lore.kernel.org
* [PATCH 0/2] accel/tcg: remove implied BQL from cpu_handle_interrupt/exception path
@ 2020-07-31 12:51 Robert Foley
  2020-07-31 12:51 ` [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass Robert Foley
  2020-07-31 12:51 ` [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag Robert Foley
  0 siblings, 2 replies; 13+ messages in thread
From: Robert Foley @ 2020-07-31 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: robert.foley, cota, pbonzini, peter.puhov, alex.bennee, rth

The purpose of this change is to lay the groundwork
so that an arch can move towards removing
the BQL from the cpu_handle_interrupt/exception paths.

The BQL is a bottleneck when scaling to more cores,
and the cpu_handle_interrupt/exception path is one of
the key BQL users as measured by QEMU's sync profiling (qsp).

We have chosen to break up the process of removing
BQL from this path into two pieces:

1) Changes to the core/common functions
   of cpu_handle_interrupt/exception,
   allowing a per-arch decision on whether to hold the BQL.
   The common case, and the default, is for the core code
   to hold the BQL (bql=true).
   This set of changes is handled by this patch series.

2) Removing the BQL from the per-arch functions.
   1) makes it possible for an arch to set bql=false
   so that the common code does not hold the BQL
   across the cpu_handle_interrupt/exception call.
   This allows the arch to handle locking in this path itself.
   We leave it up to each arch to make the change
   at the time that makes sense.

It is worth mentioning that we are already working on per-arch
changes in line with 2) and plan to submit them.
In other words, this series sets the groundwork,
and later series will take advantage of it.

This patch series is based on the per-CPU locks patch:
https://lists.gnu.org/archive/html/qemu-devel/2020-06/msg05314.html

Robert Foley (2):
  hw/core: Add bql_interrupt flag to CPUClass
  accel/tcg: interrupt/exception handling uses bql_interrupt flag

 accel/tcg/cpu-exec.c  | 34 ++++++++++++++++++++++++++--------
 hw/core/cpu.c         |  1 +
 include/hw/core/cpu.h |  8 ++++++++
 3 files changed, 35 insertions(+), 8 deletions(-)

-- 
2.17.1



^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass
  2020-07-31 12:51 [PATCH 0/2] accel/tcg: remove implied BQL from cpu_handle_interrupt/exception path Robert Foley
@ 2020-07-31 12:51 ` Robert Foley
  2020-07-31 17:43   ` Eduardo Habkost
  2020-07-31 12:51 ` [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag Robert Foley
  1 sibling, 1 reply; 13+ messages in thread
From: Robert Foley @ 2020-07-31 12:51 UTC (permalink / raw)
  To: qemu-devel
  Cc: robert.foley, Eduardo Habkost, cota, pbonzini, peter.puhov,
	alex.bennee, rth

The new bql_interrupt flag allows the CPUClass to
determine whether the BQL should be held during calls to
cpu_exec_interrupt or do_interrupt.

This is being added in preparation for changes to
cpu_handle_interrupt, which will use this flag.

Signed-off-by: Robert Foley <robert.foley@linaro.org>
---
 hw/core/cpu.c         | 1 +
 include/hw/core/cpu.h | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/hw/core/cpu.c b/hw/core/cpu.c
index 8707ce2c34..7ab88caa97 100644
--- a/hw/core/cpu.c
+++ b/hw/core/cpu.c
@@ -425,6 +425,7 @@ static void cpu_class_init(ObjectClass *klass, void *data)
     k->cpu_exec_exit = cpu_common_noop;
     k->cpu_exec_interrupt = cpu_common_exec_interrupt;
     k->adjust_watchpoint_address = cpu_adjust_watchpoint_address;
+    k->bql_interrupt = true;
     set_bit(DEVICE_CATEGORY_CPU, dc->categories);
     dc->realize = cpu_common_realizefn;
     dc->unrealize = cpu_common_unrealizefn;
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 6a2c77682f..d2c426ee5d 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -157,6 +157,8 @@ struct TranslationBlock;
  * @disas_set_info: Setup architecture specific components of disassembly info
  * @adjust_watchpoint_address: Perform a target-specific adjustment to an
  * address before attempting to match it against watchpoints.
+ * @bql_interrupt: Hold BQL while performing the cpu_exec_interrupt
+ *                 or do_interrupt call.
  *
  * Represents a CPU family or model.
  */
@@ -227,6 +229,7 @@ typedef struct CPUClass {
     /* Keep non-pointer data at the end to minimize holes.  */
     int gdb_num_core_regs;
     bool gdb_stop_before_watchpoint;
+    bool bql_interrupt;
 } CPUClass;
 
 /*
@@ -589,6 +592,11 @@ static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
     }
 }
 
+static inline void cpu_class_enable_bql_interrupt(CPUClass *cc)
+{
+    cc->bql_interrupt = true;
+}
+
 /**
  * qemu_tcg_mttcg_enabled:
  * Check whether we are running MultiThread TCG or not.
-- 
2.17.1



^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag
  2020-07-31 12:51 [PATCH 0/2] accel/tcg: remove implied BQL from cpu_handle_interrupt/exception path Robert Foley
  2020-07-31 12:51 ` [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass Robert Foley
@ 2020-07-31 12:51 ` Robert Foley
  2020-07-31 18:02   ` Paolo Bonzini
  1 sibling, 1 reply; 13+ messages in thread
From: Robert Foley @ 2020-07-31 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: robert.foley, cota, pbonzini, peter.puhov, alex.bennee, rth

This change removes the implied BQL from the cpu_handle_interrupt
and cpu_handle_exception paths. We can now select per-arch whether
the BQL is needed by using the bql_interrupt flag.
By default, the core code holds the BQL.
One benefit of this change is that it leaves it up to the arch
to remove the BQL when it makes sense.

Signed-off-by: Robert Foley <robert.foley@linaro.org>
---
 accel/tcg/cpu-exec.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 80d0e649b2..cde27ee0bf 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -517,9 +517,13 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
 #else
         if (replay_exception()) {
             CPUClass *cc = CPU_GET_CLASS(cpu);
-            qemu_mutex_lock_iothread();
+            if (cc->bql_interrupt) {
+                qemu_mutex_lock_iothread();
+            }
             cc->do_interrupt(cpu);
-            qemu_mutex_unlock_iothread();
+            if (cc->bql_interrupt) {
+                qemu_mutex_unlock_iothread();
+            }
             cpu->exception_index = -1;
 
             if (unlikely(cpu->singlestep_enabled)) {
@@ -558,7 +562,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
     if (unlikely(cpu_interrupt_request(cpu))) {
         int interrupt_request;
 
-        qemu_mutex_lock_iothread();
+        cpu_mutex_lock(cpu);
         interrupt_request = cpu_interrupt_request(cpu);
         if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
             /* Mask out external interrupts for this step. */
@@ -567,7 +571,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
         if (interrupt_request & CPU_INTERRUPT_DEBUG) {
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
             cpu->exception_index = EXCP_DEBUG;
-            qemu_mutex_unlock_iothread();
+            cpu_mutex_unlock(cpu);
             return true;
         }
         if (replay_mode == REPLAY_MODE_PLAY && !replay_has_interrupt()) {
@@ -577,13 +581,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
             cpu_halted_set(cpu, 1);
             cpu->exception_index = EXCP_HLT;
-            qemu_mutex_unlock_iothread();
+            cpu_mutex_unlock(cpu);
             return true;
         }
 #if defined(TARGET_I386)
         else if (interrupt_request & CPU_INTERRUPT_INIT) {
             X86CPU *x86_cpu = X86_CPU(cpu);
             CPUArchState *env = &x86_cpu->env;
+            cpu_mutex_unlock(cpu);
+            qemu_mutex_lock_iothread();
             replay_interrupt();
             cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
             do_cpu_init(x86_cpu);
@@ -595,7 +601,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
         else if (interrupt_request & CPU_INTERRUPT_RESET) {
             replay_interrupt();
             cpu_reset(cpu);
-            qemu_mutex_unlock_iothread();
+            cpu_mutex_unlock(cpu);
             return true;
         }
 #endif
@@ -604,7 +610,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
            True when it is, and we should restart on a new TB,
            and via longjmp via cpu_loop_exit.  */
         else {
+            cpu_mutex_unlock(cpu);
+            if (cc->bql_interrupt) {
+                qemu_mutex_lock_iothread();
+            }
             if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
+                if (cc->bql_interrupt) {
+                    qemu_mutex_unlock_iothread();
+                }
+                cpu_mutex_lock(cpu);
                 replay_interrupt();
                 /*
                  * After processing the interrupt, ensure an EXCP_DEBUG is
@@ -614,6 +628,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
                 cpu->exception_index =
                     (cpu->singlestep_enabled ? EXCP_DEBUG : -1);
                 *last_tb = NULL;
+            } else {
+                if (cc->bql_interrupt) {
+                    qemu_mutex_unlock_iothread();
+                }
+                cpu_mutex_lock(cpu);
             }
             /* The target hook may have updated the 'cpu->interrupt_request';
              * reload the 'interrupt_request' value */
@@ -627,7 +646,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
         }
 
         /* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
-        qemu_mutex_unlock_iothread();
+        cpu_mutex_unlock(cpu);
     }
 
     /* Finally, check if we need to exit to the main loop.  */
@@ -691,7 +710,6 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
     }
 #endif
 }
-
 /* main execution loop */
 
 int cpu_exec(CPUState *cpu)
-- 
2.17.1



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass
  2020-07-31 12:51 ` [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass Robert Foley
@ 2020-07-31 17:43   ` Eduardo Habkost
  2020-07-31 19:14     ` Robert Foley
  0 siblings, 1 reply; 13+ messages in thread
From: Eduardo Habkost @ 2020-07-31 17:43 UTC (permalink / raw)
  To: Robert Foley; +Cc: qemu-devel, cota, pbonzini, peter.puhov, alex.bennee, rth

On Fri, Jul 31, 2020 at 08:51:26AM -0400, Robert Foley wrote:
> The new bql_interrupt flag allows the CPUClass to
> determine whether the BQL should be held during calls to
> cpu_exec_interrupt or do_interrupt.
> 
> This is being added in preparation for changes to
> cpu_handle_interrupt, which will use this flag.
> 
> Signed-off-by: Robert Foley <robert.foley@linaro.org>
> ---
>  hw/core/cpu.c         | 1 +
>  include/hw/core/cpu.h | 8 ++++++++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/hw/core/cpu.c b/hw/core/cpu.c
> index 8707ce2c34..7ab88caa97 100644
> --- a/hw/core/cpu.c
> +++ b/hw/core/cpu.c
> @@ -425,6 +425,7 @@ static void cpu_class_init(ObjectClass *klass, void *data)
>      k->cpu_exec_exit = cpu_common_noop;
>      k->cpu_exec_interrupt = cpu_common_exec_interrupt;
>      k->adjust_watchpoint_address = cpu_adjust_watchpoint_address;
> +    k->bql_interrupt = true;
>      set_bit(DEVICE_CATEGORY_CPU, dc->categories);
>      dc->realize = cpu_common_realizefn;
>      dc->unrealize = cpu_common_unrealizefn;
> diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> index 6a2c77682f..d2c426ee5d 100644
> --- a/include/hw/core/cpu.h
> +++ b/include/hw/core/cpu.h
> @@ -157,6 +157,8 @@ struct TranslationBlock;
>   * @disas_set_info: Setup architecture specific components of disassembly info
>   * @adjust_watchpoint_address: Perform a target-specific adjustment to an
>   * address before attempting to match it against watchpoints.
> + * @bql_interrupt: Hold BQL while performing the cpu_exec_interrupt
> + *                 or do_interrupt call.
>   *
>   * Represents a CPU family or model.
>   */
> @@ -227,6 +229,7 @@ typedef struct CPUClass {
>      /* Keep non-pointer data at the end to minimize holes.  */
>      int gdb_num_core_regs;
>      bool gdb_stop_before_watchpoint;
> +    bool bql_interrupt;
>  } CPUClass;
>  
>  /*
> @@ -589,6 +592,11 @@ static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
>      }
>  }
>  
> +static inline void cpu_class_enable_bql_interrupt(CPUClass *cc)
> +{
> +    cc->bql_interrupt = true;
> +}

Class data is not supposed to change outside class_init.  Why do
you need this function?  I don't see it being used anywhere in
this series.

-- 
Eduardo



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag
  2020-07-31 12:51 ` [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag Robert Foley
@ 2020-07-31 18:02   ` Paolo Bonzini
  2020-07-31 20:09     ` Robert Foley
  0 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2020-07-31 18:02 UTC (permalink / raw)
  To: Robert Foley, qemu-devel; +Cc: peter.puhov, cota, alex.bennee, rth

On 31/07/20 14:51, Robert Foley wrote:
> This change removes the implied BQL from the cpu_handle_interrupt
> and cpu_handle_exception paths. We can now select per-arch whether
> the BQL is needed by using the bql_interrupt flag.
> By default, the core code holds the BQL.
> One benefit of this change is that it leaves it up to the arch
> to remove the BQL when it makes sense.
> 
> Signed-off-by: Robert Foley <robert.foley@linaro.org>

No, please just modify all implementations to do lock/unlock.  It's a
simpler patch than this one.

Paolo

> ---
>  accel/tcg/cpu-exec.c | 34 ++++++++++++++++++++++++++--------
>  1 file changed, 26 insertions(+), 8 deletions(-)
> 
> diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> index 80d0e649b2..cde27ee0bf 100644
> --- a/accel/tcg/cpu-exec.c
> +++ b/accel/tcg/cpu-exec.c
> @@ -517,9 +517,13 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
>  #else
>          if (replay_exception()) {
>              CPUClass *cc = CPU_GET_CLASS(cpu);
> -            qemu_mutex_lock_iothread();
> +            if (cc->bql_interrupt) {
> +                qemu_mutex_lock_iothread();
> +            }
>              cc->do_interrupt(cpu);
> -            qemu_mutex_unlock_iothread();
> +            if (cc->bql_interrupt) {
> +                qemu_mutex_unlock_iothread();
> +            }
>              cpu->exception_index = -1;
>  
>              if (unlikely(cpu->singlestep_enabled)) {
> @@ -558,7 +562,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>      if (unlikely(cpu_interrupt_request(cpu))) {
>          int interrupt_request;
>  
> -        qemu_mutex_lock_iothread();
> +        cpu_mutex_lock(cpu);
>          interrupt_request = cpu_interrupt_request(cpu);
>          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
>              /* Mask out external interrupts for this step. */
> @@ -567,7 +571,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
>              cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
>              cpu->exception_index = EXCP_DEBUG;
> -            qemu_mutex_unlock_iothread();
> +            cpu_mutex_unlock(cpu);
>              return true;
>          }
>          if (replay_mode == REPLAY_MODE_PLAY && !replay_has_interrupt()) {
> @@ -577,13 +581,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>              cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
>              cpu_halted_set(cpu, 1);
>              cpu->exception_index = EXCP_HLT;
> -            qemu_mutex_unlock_iothread();
> +            cpu_mutex_unlock(cpu);
>              return true;
>          }
>  #if defined(TARGET_I386)
>          else if (interrupt_request & CPU_INTERRUPT_INIT) {
>              X86CPU *x86_cpu = X86_CPU(cpu);
>              CPUArchState *env = &x86_cpu->env;
> +            cpu_mutex_unlock(cpu);
> +            qemu_mutex_lock_iothread();
>              replay_interrupt();
>              cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
>              do_cpu_init(x86_cpu);
> @@ -595,7 +601,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>          else if (interrupt_request & CPU_INTERRUPT_RESET) {
>              replay_interrupt();
>              cpu_reset(cpu);
> -            qemu_mutex_unlock_iothread();
> +            cpu_mutex_unlock(cpu);
>              return true;
>          }
>  #endif
> @@ -604,7 +610,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>             True when it is, and we should restart on a new TB,
>             and via longjmp via cpu_loop_exit.  */
>          else {
> +            cpu_mutex_unlock(cpu);
> +            if (cc->bql_interrupt) {
> +                qemu_mutex_lock_iothread();
> +            }
>              if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
> +                if (cc->bql_interrupt) {
> +                    qemu_mutex_unlock_iothread();
> +                }
> +                cpu_mutex_lock(cpu);
>                  replay_interrupt();
>                  /*
>                   * After processing the interrupt, ensure an EXCP_DEBUG is
> @@ -614,6 +628,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>                  cpu->exception_index =
>                      (cpu->singlestep_enabled ? EXCP_DEBUG : -1);
>                  *last_tb = NULL;
> +            } else {
> +                if (cc->bql_interrupt) {
> +                    qemu_mutex_unlock_iothread();
> +                }
> +                cpu_mutex_lock(cpu);
>              }
>              /* The target hook may have updated the 'cpu->interrupt_request';
>               * reload the 'interrupt_request' value */
> @@ -627,7 +646,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
>          }
>  
>          /* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
> -        qemu_mutex_unlock_iothread();
> +        cpu_mutex_unlock(cpu);
>      }
>  
>      /* Finally, check if we need to exit to the main loop.  */
> @@ -691,7 +710,6 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
>      }
>  #endif
>  }
> -
>  /* main execution loop */
>  
>  int cpu_exec(CPUState *cpu)
> 



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass
  2020-07-31 17:43   ` Eduardo Habkost
@ 2020-07-31 19:14     ` Robert Foley
  2020-07-31 19:24       ` Eduardo Habkost
  0 siblings, 1 reply; 13+ messages in thread
From: Robert Foley @ 2020-07-31 19:14 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: QEMU Developers, Emilio G. Cota, Paolo Bonzini, Peter Puhov,
	Alex Bennée, Richard Henderson

On Fri, 31 Jul 2020 at 13:44, Eduardo Habkost <ehabkost@redhat.com> wrote:
> >
> > +static inline void cpu_class_disable_bql_interrupt(CPUClass *cc)
> > +{
> > +    cc->bql_interrupt = false;
> > +}
>
> Class data is not supposed to change outside class_init.  Why do
> you need this function?  I don't see it being used anywhere in
> this series.

This function was to be called from changes in a later patch series
that depends on these changes.  BTW, I added a correction above:
it should be disable, not enable.  The idea is that the flag is
initialized to true, but the per-arch changes would use this call
at init time to set it to false as needed.

We can remove this function from this series and add it back later
when it gets used; that might make things more clear.

Thanks,
-Rob

> --
> Eduardo
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass
  2020-07-31 19:14     ` Robert Foley
@ 2020-07-31 19:24       ` Eduardo Habkost
  2020-08-02 16:05         ` Alex Bennée
  0 siblings, 1 reply; 13+ messages in thread
From: Eduardo Habkost @ 2020-07-31 19:24 UTC (permalink / raw)
  To: Robert Foley
  Cc: QEMU Developers, Emilio G. Cota, Paolo Bonzini, Peter Puhov,
	Alex Bennée, Richard Henderson

On Fri, Jul 31, 2020 at 03:14:02PM -0400, Robert Foley wrote:
> On Fri, 31 Jul 2020 at 13:44, Eduardo Habkost <ehabkost@redhat.com> wrote:
> > >
> > > +static inline void cpu_class_disable_bql_interrupt(CPUClass *cc)
> > > +{
> > > +    cc->bql_interrupt = false;
> > > +}
> >
> > Class data is not supposed to change outside class_init.  Why do
> > you need this function?  I don't see it being used anywhere in
> > this series.
> 
> This function was to be called from changes in a later patch series
> that depend on these changes.  BTW,  I added a correction above,
> it should be disable, not enable.  The idea is that it is initialized to true,
> but then the per arch changes would use this call at init time to set
> it to false
> as needed.

If you plan to call it from class_init, I don't think you need a
wrapper.  You can simply set cc->bql_interrupt=false directly
inside arch-specific class_init functions.

If you plan to call it from somewhere else, then maybe the field
doesn't belong to CPUClass.

> 
> We can remove this function from this series and add it in later when
> it gets used,
> it might make things more clear.

Makes sense to me.

-- 
Eduardo



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag
  2020-07-31 18:02   ` Paolo Bonzini
@ 2020-07-31 20:09     ` Robert Foley
  2020-07-31 20:21       ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Robert Foley @ 2020-07-31 20:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Puhov, Emilio G. Cota, Alex Bennée, QEMU Developers,
	Richard Henderson

On Fri, 31 Jul 2020 at 14:02, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 31/07/20 14:51, Robert Foley wrote:
> > > This change removes the implied BQL from the cpu_handle_interrupt
> > > and cpu_handle_exception paths. We can now select per-arch whether
> > > the BQL is needed by using the bql_interrupt flag.
> > > By default, the core code holds the BQL.
> > > One benefit of this change is that it leaves it up to the arch
> > > to remove the BQL when it makes sense.
> >
> > Signed-off-by: Robert Foley <robert.foley@linaro.org>
>
> > No, please just modify all implementations to do lock/unlock.  It's a
> > simpler patch than this one.

Sure, we will update the patch based on this.

To clarify, the suggestion here is to remove the bql_interrupt flag
that we added and change all the per-arch interrupt callback code to
do the lock/unlock of the BQL?  So, for example, change
x86_cpu_exec_interrupt, arm_cpu_exec_interrupt, etc. to lock/unlock the BQL?

Thanks,
-Rob


>
> Paolo
>
> > ---
> >  accel/tcg/cpu-exec.c | 34 ++++++++++++++++++++++++++--------
> >  1 file changed, 26 insertions(+), 8 deletions(-)
> >
> > diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> > index 80d0e649b2..cde27ee0bf 100644
> > --- a/accel/tcg/cpu-exec.c
> > +++ b/accel/tcg/cpu-exec.c
> > @@ -517,9 +517,13 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
> >  #else
> >          if (replay_exception()) {
> >              CPUClass *cc = CPU_GET_CLASS(cpu);
> > -            qemu_mutex_lock_iothread();
> > +            if (cc->bql_interrupt) {
> > +                qemu_mutex_lock_iothread();
> > +            }
> >              cc->do_interrupt(cpu);
> > -            qemu_mutex_unlock_iothread();
> > +            if (cc->bql_interrupt) {
> > +                qemu_mutex_unlock_iothread();
> > +            }
> >              cpu->exception_index = -1;
> >
> >              if (unlikely(cpu->singlestep_enabled)) {
> > @@ -558,7 +562,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >      if (unlikely(cpu_interrupt_request(cpu))) {
> >          int interrupt_request;
> >
> > -        qemu_mutex_lock_iothread();
> > +        cpu_mutex_lock(cpu);
> >          interrupt_request = cpu_interrupt_request(cpu);
> >          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
> >              /* Mask out external interrupts for this step. */
> > @@ -567,7 +571,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
> >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
> >              cpu->exception_index = EXCP_DEBUG;
> > -            qemu_mutex_unlock_iothread();
> > +            cpu_mutex_unlock(cpu);
> >              return true;
> >          }
> >          if (replay_mode == REPLAY_MODE_PLAY && !replay_has_interrupt()) {
> > @@ -577,13 +581,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
> >              cpu_halted_set(cpu, 1);
> >              cpu->exception_index = EXCP_HLT;
> > -            qemu_mutex_unlock_iothread();
> > +            cpu_mutex_unlock(cpu);
> >              return true;
> >          }
> >  #if defined(TARGET_I386)
> >          else if (interrupt_request & CPU_INTERRUPT_INIT) {
> >              X86CPU *x86_cpu = X86_CPU(cpu);
> >              CPUArchState *env = &x86_cpu->env;
> > +            cpu_mutex_unlock(cpu);
> > +            qemu_mutex_lock_iothread();
> >              replay_interrupt();
> >              cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
> >              do_cpu_init(x86_cpu);
> > @@ -595,7 +601,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >          else if (interrupt_request & CPU_INTERRUPT_RESET) {
> >              replay_interrupt();
> >              cpu_reset(cpu);
> > -            qemu_mutex_unlock_iothread();
> > +            cpu_mutex_unlock(cpu);
> >              return true;
> >          }
> >  #endif
> > @@ -604,7 +610,15 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >             True when it is, and we should restart on a new TB,
> >             and via longjmp via cpu_loop_exit.  */
> >          else {
> > +            cpu_mutex_unlock(cpu);
> > +            if (cc->bql_interrupt) {
> > +                qemu_mutex_lock_iothread();
> > +            }
> >              if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
> > +                if (cc->bql_interrupt) {
> > +                    qemu_mutex_unlock_iothread();
> > +                }
> > +                cpu_mutex_lock(cpu);
> >                  replay_interrupt();
> >                  /*
> >                   * After processing the interrupt, ensure an EXCP_DEBUG is
> > @@ -614,6 +628,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >                  cpu->exception_index =
> >                      (cpu->singlestep_enabled ? EXCP_DEBUG : -1);
> >                  *last_tb = NULL;
> > +            } else {
> > +                if (cc->bql_interrupt) {
> > +                    qemu_mutex_unlock_iothread();
> > +                }
> > +                cpu_mutex_lock(cpu);
> >              }
> >              /* The target hook may have updated the 'cpu->interrupt_request';
> >               * reload the 'interrupt_request' value */
> > @@ -627,7 +646,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >          }
> >
> >          /* If we exit via cpu_loop_exit/longjmp it is reset in cpu_exec */
> > -        qemu_mutex_unlock_iothread();
> > +        cpu_mutex_unlock(cpu);
> >      }
> >
> >      /* Finally, check if we need to exit to the main loop.  */
> > @@ -691,7 +710,6 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
> >      }
> >  #endif
> >  }
> > -
> >  /* main execution loop */
> >
> >  int cpu_exec(CPUState *cpu)
> >
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag
  2020-07-31 20:09     ` Robert Foley
@ 2020-07-31 20:21       ` Paolo Bonzini
  2020-08-02 16:09         ` Alex Bennée
  0 siblings, 1 reply; 13+ messages in thread
From: Paolo Bonzini @ 2020-07-31 20:21 UTC (permalink / raw)
  To: Robert Foley
  Cc: Peter Puhov, Emilio G. Cota, Alex Bennée, QEMU Developers,
	Richard Henderson



Yes, that is correct. It's more work but also more maintainable.

Thanks,

Paolo

Il ven 31 lug 2020, 22:09 Robert Foley <robert.foley@linaro.org> ha scritto:

> On Fri, 31 Jul 2020 at 14:02, Paolo Bonzini <pbonzini@redhat.com> wrote:
> >
> > On 31/07/20 14:51, Robert Foley wrote:
> > > This change removes the implied BQL from the cpu_handle_interrupt,
> > > and cpu_handle_exception paths. We can now select per-arch if
> > > the BQL is needed or not by using the bql_interrupt flag.
> > > By default, the core code holds the BQL.
> > > One benefit of this change is that it leaves it up to the arch
> > > to make the change to remove BQL when it makes sense.
> > >
> > > Signed-off-by: Robert Foley <robert.foley@linaro.org>
> >
> > No, please just modify all implementations to do lock/unlock.  It's a
> > simpler patch than this one.
>
> Sure, we will update the patch based on this.
>
> To clarify, the suggestion here is to remove the bql_interrupt flag
> that we added and change all the per-arch interrupt callback code to
> do the lock/unlock of the BQL?  So for example change
> x86_cpu_exec_interrupt, and arm_cpu_exec_interrupt, etc to lock/unlock BQL?
>
> Thanks,
> -Rob
>
>
> >
> > Paolo
> >
> > > ---
> > >  accel/tcg/cpu-exec.c | 34 ++++++++++++++++++++++++++--------
> > >  1 file changed, 26 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
> > > index 80d0e649b2..cde27ee0bf 100644
> > > --- a/accel/tcg/cpu-exec.c
> > > +++ b/accel/tcg/cpu-exec.c
> > > @@ -517,9 +517,13 @@ static inline bool cpu_handle_exception(CPUState
> *cpu, int *ret)
> > >  #else
> > >          if (replay_exception()) {
> > >              CPUClass *cc = CPU_GET_CLASS(cpu);
> > > -            qemu_mutex_lock_iothread();
> > > +            if (cc->bql_interrupt) {
> > > +                qemu_mutex_lock_iothread();
> > > +            }
> > >              cc->do_interrupt(cpu);
> > > -            qemu_mutex_unlock_iothread();
> > > +            if (cc->bql_interrupt) {
> > > +                qemu_mutex_unlock_iothread();
> > > +            }
> > >              cpu->exception_index = -1;
> > >
> > >              if (unlikely(cpu->singlestep_enabled)) {
> > > @@ -558,7 +562,7 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >      if (unlikely(cpu_interrupt_request(cpu))) {
> > >          int interrupt_request;
> > >
> > > -        qemu_mutex_lock_iothread();
> > > +        cpu_mutex_lock(cpu);
> > >          interrupt_request = cpu_interrupt_request(cpu);
> > >          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
> > >              /* Mask out external interrupts for this step. */
> > > @@ -567,7 +571,7 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
> > >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
> > >              cpu->exception_index = EXCP_DEBUG;
> > > -            qemu_mutex_unlock_iothread();
> > > +            cpu_mutex_unlock(cpu);
> > >              return true;
> > >          }
> > >          if (replay_mode == REPLAY_MODE_PLAY &&
> !replay_has_interrupt()) {
> > > @@ -577,13 +581,15 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
> > >              cpu_halted_set(cpu, 1);
> > >              cpu->exception_index = EXCP_HLT;
> > > -            qemu_mutex_unlock_iothread();
> > > +            cpu_mutex_unlock(cpu);
> > >              return true;
> > >          }
> > >  #if defined(TARGET_I386)
> > >          else if (interrupt_request & CPU_INTERRUPT_INIT) {
> > >              X86CPU *x86_cpu = X86_CPU(cpu);
> > >              CPUArchState *env = &x86_cpu->env;
> > > +            cpu_mutex_unlock(cpu);
> > > +            qemu_mutex_lock_iothread();
> > >              replay_interrupt();
> > >              cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
> > >              do_cpu_init(x86_cpu);
> > > @@ -595,7 +601,7 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >          else if (interrupt_request & CPU_INTERRUPT_RESET) {
> > >              replay_interrupt();
> > >              cpu_reset(cpu);
> > > -            qemu_mutex_unlock_iothread();
> > > +            cpu_mutex_unlock(cpu);
> > >              return true;
> > >          }
> > >  #endif
> > > @@ -604,7 +610,15 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >             True when it is, and we should restart on a new TB,
> > >             and via longjmp via cpu_loop_exit.  */
> > >          else {
> > > +            cpu_mutex_unlock(cpu);
> > > +            if (cc->bql_interrupt) {
> > > +                qemu_mutex_lock_iothread();
> > > +            }
> > >              if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
> > > +                if (cc->bql_interrupt) {
> > > +                    qemu_mutex_unlock_iothread();
> > > +                }
> > > +                cpu_mutex_lock(cpu);
> > >                  replay_interrupt();
> > >                  /*
> > >                   * After processing the interrupt, ensure an
> EXCP_DEBUG is
> > > @@ -614,6 +628,11 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >                  cpu->exception_index =
> > >                      (cpu->singlestep_enabled ? EXCP_DEBUG : -1);
> > >                  *last_tb = NULL;
> > > +            } else {
> > > +                if (cc->bql_interrupt) {
> > > +                    qemu_mutex_unlock_iothread();
> > > +                }
> > > +                cpu_mutex_lock(cpu);
> > >              }
> > >              /* The target hook may have updated the
> 'cpu->interrupt_request';
> > >               * reload the 'interrupt_request' value */
> > > @@ -627,7 +646,7 @@ static inline bool cpu_handle_interrupt(CPUState
> *cpu,
> > >          }
> > >
> > >          /* If we exit via cpu_loop_exit/longjmp it is reset in
> cpu_exec */
> > > -        qemu_mutex_unlock_iothread();
> > > +        cpu_mutex_unlock(cpu);
> > >      }
> > >
> > >      /* Finally, check if we need to exit to the main loop.  */
> > > @@ -691,7 +710,6 @@ static inline void cpu_loop_exec_tb(CPUState *cpu,
> TranslationBlock *tb,
> > >      }
> > >  #endif
> > >  }
> > > -
> > >  /* main execution loop */
> > >
> > >  int cpu_exec(CPUState *cpu)
> > >
> >
>
>

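Taken out of QEMU, the conditional-locking shape of the cpu_handle_exception hunk above can be modeled in a few lines of plain C. Everything here is a mock (the mutex, MockCPUClass, the hook are hypothetical stand-ins); only the control flow mirrors the patch:

```c
#include <stdbool.h>
#include <pthread.h>

/* Mock of the Big QEMU Lock and of the bql_interrupt flag from patch 1. */
static pthread_mutex_t mock_bql = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    bool bql_interrupt;           /* true: core code wraps the hook in the BQL */
    void (*do_interrupt)(void);   /* per-arch hook */
} MockCPUClass;

static int interrupts_delivered;

static void mock_arch_do_interrupt(void)
{
    interrupts_delivered++;
}

/* Shape of the cpu_handle_exception() hunk: the lock is taken around the
 * arch hook only when the class opted in via bql_interrupt. */
static void handle_exception(MockCPUClass *cc)
{
    if (cc->bql_interrupt) {
        pthread_mutex_lock(&mock_bql);
    }
    cc->do_interrupt();
    if (cc->bql_interrupt) {
        pthread_mutex_unlock(&mock_bql);
    }
}
```

Either setting of `bql_interrupt` delivers the interrupt; only whether the mock BQL is held around the hook changes, which is exactly the per-arch choice the series is after.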

* Re: [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass
  2020-07-31 19:24       ` Eduardo Habkost
@ 2020-08-02 16:05         ` Alex Bennée
  2020-08-04 20:36           ` Eduardo Habkost
  0 siblings, 1 reply; 13+ messages in thread
From: Alex Bennée @ 2020-08-02 16:05 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: Robert Foley, QEMU Developers, Emilio G. Cota, Paolo Bonzini,
	Peter Puhov, Richard Henderson


Eduardo Habkost <ehabkost@redhat.com> writes:

> On Fri, Jul 31, 2020 at 03:14:02PM -0400, Robert Foley wrote:
>> On Fri, 31 Jul 2020 at 13:44, Eduardo Habkost <ehabkost@redhat.com> wrote:
>> > >
>> > > +static inline void cpu_class_disable_bql_interrupt(CPUClass *cc)
>> > > +{
>> > > +    cc->bql_interrupt = false;
>> > > +}
>> >
>> > Class data is not supposed to change outside class_init.  Why do
>> > you need this function?  I don't see it being used anywhere in
>> > this series.
>> 
>> This function was to be called from changes in a later patch series
>> that depend on these changes.  BTW,  I added a correction above,
>> it should be disable, not enable.  The idea is that it is initialized to true,
>> but then the per-arch changes would use this call at init time to set
>> it to false as needed.
>
> If you plan to call it from class_init, I don't think you need a
> wrapper.  You can simply set cc->bql_interrupt=false directly
> inside arch-specific class_init functions.

We just need to be careful of the ordering so the base class init goes
first. Is that always the case?

>
> If you plan to call it from somewhere else, then maybe the field
> doesn't belong to CPUClass.
>
>> 
>> We can remove this function from this series and add it in later when
>> it gets used,
>> it might make things more clear.
>
> Makes sense to me.


-- 
Alex Bennée



* Re: [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag
  2020-07-31 20:21       ` Paolo Bonzini
@ 2020-08-02 16:09         ` Alex Bennée
  2020-08-03  7:11           ` Paolo Bonzini
  0 siblings, 1 reply; 13+ messages in thread
From: Alex Bennée @ 2020-08-02 16:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Puhov, Richard Henderson, Emilio G. Cota, Robert Foley,
	QEMU Developers


Paolo Bonzini <pbonzini@redhat.com> writes:

> Yes, that is correct. It's more work but also more maintainable.

I originally suggested keeping the locking choice up in the main loop
because I suspect most guests will stick to BQL IRQs until they find it
is a bottleneck.

cpu_handle_interrupt/exception have never been my favourite functions,
but perhaps there is a way to refactor and clean them up to keep this
in core code?

I do worry that hiding BQL activity in the guest code makes it harder to
reason about what locks are currently held when reading the code.

>
> Thanks,
>
> Paolo
>
> On Fri, 31 Jul 2020 at 22:09, Robert Foley <robert.foley@linaro.org> wrote:
>
>> On Fri, 31 Jul 2020 at 14:02, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> >
>> > On 31/07/20 14:51, Robert Foley wrote:
>> > > This change removes the implied BQL from the cpu_handle_interrupt
>> > > and cpu_handle_exception paths. We can now select per-arch if
>> > > the BQL is needed or not by using the bql_interrupt flag.
>> > > By default, the core code holds the BQL.
>> > > One benefit of this change is that it leaves it up to the arch
>> > > to make the change to remove BQL when it makes sense.
>> > >
>> > > Signed-off-by: Robert Foley <robert.foley@linaro.org>
>> >
>> > No, please just modify all implementations to do lock/unlock.  It's a
>> > simpler patch than this one.
>>
>> Sure, we will update the patch based on this.
>>
>> To clarify, the suggestion here is to remove the bql_interrupt flag
>> that we added and change all the per-arch interrupt callback code to
>> do the lock/unlock of the BQL?  So for example change
>> x86_cpu_exec_interrupt, arm_cpu_exec_interrupt, etc. to lock/unlock the BQL?
>>
>> Thanks,
>> -Rob
>>
>>
>> >
>> > Paolo
>> >
>> > > ---
>> > >  accel/tcg/cpu-exec.c | 34 ++++++++++++++++++++++++++--------
>> > >  1 file changed, 26 insertions(+), 8 deletions(-)
>> > >
>> > > diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
>> > > index 80d0e649b2..cde27ee0bf 100644
>> > > --- a/accel/tcg/cpu-exec.c
>> > > +++ b/accel/tcg/cpu-exec.c
>> > > @@ -517,9 +517,13 @@ static inline bool cpu_handle_exception(CPUState
>> *cpu, int *ret)
>> > >  #else
>> > >          if (replay_exception()) {
>> > >              CPUClass *cc = CPU_GET_CLASS(cpu);
>> > > -            qemu_mutex_lock_iothread();
>> > > +            if (cc->bql_interrupt) {
>> > > +                qemu_mutex_lock_iothread();
>> > > +            }
>> > >              cc->do_interrupt(cpu);
>> > > -            qemu_mutex_unlock_iothread();
>> > > +            if (cc->bql_interrupt) {
>> > > +                qemu_mutex_unlock_iothread();
>> > > +            }
>> > >              cpu->exception_index = -1;
>> > >
>> > >              if (unlikely(cpu->singlestep_enabled)) {
>> > > @@ -558,7 +562,7 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >      if (unlikely(cpu_interrupt_request(cpu))) {
>> > >          int interrupt_request;
>> > >
>> > > -        qemu_mutex_lock_iothread();
>> > > +        cpu_mutex_lock(cpu);
>> > >          interrupt_request = cpu_interrupt_request(cpu);
>> > >          if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
>> > >              /* Mask out external interrupts for this step. */
>> > > @@ -567,7 +571,7 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >          if (interrupt_request & CPU_INTERRUPT_DEBUG) {
>> > >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
>> > >              cpu->exception_index = EXCP_DEBUG;
>> > > -            qemu_mutex_unlock_iothread();
>> > > +            cpu_mutex_unlock(cpu);
>> > >              return true;
>> > >          }
>> > >          if (replay_mode == REPLAY_MODE_PLAY &&
>> !replay_has_interrupt()) {
>> > > @@ -577,13 +581,15 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >              cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
>> > >              cpu_halted_set(cpu, 1);
>> > >              cpu->exception_index = EXCP_HLT;
>> > > -            qemu_mutex_unlock_iothread();
>> > > +            cpu_mutex_unlock(cpu);
>> > >              return true;
>> > >          }
>> > >  #if defined(TARGET_I386)
>> > >          else if (interrupt_request & CPU_INTERRUPT_INIT) {
>> > >              X86CPU *x86_cpu = X86_CPU(cpu);
>> > >              CPUArchState *env = &x86_cpu->env;
>> > > +            cpu_mutex_unlock(cpu);
>> > > +            qemu_mutex_lock_iothread();
>> > >              replay_interrupt();
>> > >              cpu_svm_check_intercept_param(env, SVM_EXIT_INIT, 0, 0);
>> > >              do_cpu_init(x86_cpu);
>> > > @@ -595,7 +601,7 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >          else if (interrupt_request & CPU_INTERRUPT_RESET) {
>> > >              replay_interrupt();
>> > >              cpu_reset(cpu);
>> > > -            qemu_mutex_unlock_iothread();
>> > > +            cpu_mutex_unlock(cpu);
>> > >              return true;
>> > >          }
>> > >  #endif
>> > > @@ -604,7 +610,15 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >             True when it is, and we should restart on a new TB,
>> > >             and via longjmp via cpu_loop_exit.  */
>> > >          else {
>> > > +            cpu_mutex_unlock(cpu);
>> > > +            if (cc->bql_interrupt) {
>> > > +                qemu_mutex_lock_iothread();
>> > > +            }
>> > >              if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
>> > > +                if (cc->bql_interrupt) {
>> > > +                    qemu_mutex_unlock_iothread();
>> > > +                }
>> > > +                cpu_mutex_lock(cpu);
>> > >                  replay_interrupt();
>> > >                  /*
>> > >                   * After processing the interrupt, ensure an
>> EXCP_DEBUG is
>> > > @@ -614,6 +628,11 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >                  cpu->exception_index =
>> > >                      (cpu->singlestep_enabled ? EXCP_DEBUG : -1);
>> > >                  *last_tb = NULL;
>> > > +            } else {
>> > > +                if (cc->bql_interrupt) {
>> > > +                    qemu_mutex_unlock_iothread();
>> > > +                }
>> > > +                cpu_mutex_lock(cpu);
>> > >              }
>> > >              /* The target hook may have updated the
>> 'cpu->interrupt_request';
>> > >               * reload the 'interrupt_request' value */
>> > > @@ -627,7 +646,7 @@ static inline bool cpu_handle_interrupt(CPUState
>> *cpu,
>> > >          }
>> > >
>> > >          /* If we exit via cpu_loop_exit/longjmp it is reset in
>> cpu_exec */
>> > > -        qemu_mutex_unlock_iothread();
>> > > +        cpu_mutex_unlock(cpu);
>> > >      }
>> > >
>> > >      /* Finally, check if we need to exit to the main loop.  */
>> > > @@ -691,7 +710,6 @@ static inline void cpu_loop_exec_tb(CPUState *cpu,
>> TranslationBlock *tb,
>> > >      }
>> > >  #endif
>> > >  }
>> > > -
>> > >  /* main execution loop */
>> > >
>> > >  int cpu_exec(CPUState *cpu)
>> > >
>> >
>>
>>


-- 
Alex Bennée



* Re: [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag
  2020-08-02 16:09         ` Alex Bennée
@ 2020-08-03  7:11           ` Paolo Bonzini
  0 siblings, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2020-08-03  7:11 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Peter Puhov, Richard Henderson, Emilio G. Cota, Robert Foley,
	QEMU Developers

On 02/08/20 18:09, Alex Bennée wrote:
> I originally suggested keeping the locking choice up in the main loop
> because I suspect most guests will stick to BQL IRQs until they find it
> is a bottleneck.

True, but the main loop is already complicated, and conditional locking
is pretty much impossible to reason about and (in the future) do static
analysis on.

Paolo



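For contrast, the direction Paolo suggests — unconditional core code, with each arch's cpu_exec_interrupt hook taking and dropping the BQL itself — can be sketched as follows. The lock and both hook names are hypothetical mocks, not QEMU code:

```c
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t mock_bql = PTHREAD_MUTEX_INITIALIZER;
static int handled;

/* An arch that still wants BQL-protected delivery takes the lock itself;
 * the core loop no longer knows or cares about it. */
static bool arch_exec_interrupt_bql(int interrupt_request)
{
    bool ret = false;

    pthread_mutex_lock(&mock_bql);
    if (interrupt_request) {
        handled++;
        ret = true;
    }
    pthread_mutex_unlock(&mock_bql);
    return ret;
}

/* An arch converted to finer-grained locking simply omits the BQL here
 * and relies on its own synchronization. */
static bool arch_exec_interrupt_lockless(int interrupt_request)
{
    if (interrupt_request) {
        handled++;
        return true;
    }
    return false;
}
```

The cost is touching every implementation once; the benefit is that the core loop stays unconditional and the locking is visible at the point where it is actually needed.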

* Re: [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass
  2020-08-02 16:05         ` Alex Bennée
@ 2020-08-04 20:36           ` Eduardo Habkost
  0 siblings, 0 replies; 13+ messages in thread
From: Eduardo Habkost @ 2020-08-04 20:36 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Robert Foley, QEMU Developers, Emilio G. Cota, Paolo Bonzini,
	Peter Puhov, Richard Henderson

On Sun, Aug 02, 2020 at 05:05:04PM +0100, Alex Bennée wrote:
> 
> Eduardo Habkost <ehabkost@redhat.com> writes:
> 
> > On Fri, Jul 31, 2020 at 03:14:02PM -0400, Robert Foley wrote:
> >> On Fri, 31 Jul 2020 at 13:44, Eduardo Habkost <ehabkost@redhat.com> wrote:
> >> > >
> >> > > +static inline void cpu_class_disable_bql_interrupt(CPUClass *cc)
> >> > > +{
> >> > > +    cc->bql_interrupt = false;
> >> > > +}
> >> >
> >> > Class data is not supposed to change outside class_init.  Why do
> >> > you need this function?  I don't see it being used anywhere in
> >> > this series.
> >> 
> >> This function was to be called from changes in a later patch series
> >> that depend on these changes.  BTW,  I added a correction above,
> >> it should be disable, not enable.  The idea is that it is initialized to true,
> >> but then the per-arch changes would use this call at init time to set
> >> it to false as needed.
> >
> > If you plan to call it from class_init, I don't think you need a
> > wrapper.  You can simply set cc->bql_interrupt=false directly
> > inside arch-specific class_init functions.
> 
> We just need to be careful of the ordering so the base class init goes
> first. Is that always the case?

Absolutely.  Subclasses overriding class data previously
initialized by the base class is a very common pattern in QOM
code.
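The ordering guarantee Eduardo describes — the parent's class_init runs before the subclass's, so the subclass can override inherited defaults — can be sketched with plain functions standing in for the real QOM machinery (all names here are hypothetical):

```c
#include <stdbool.h>

typedef struct {
    bool bql_interrupt;
} MockCPUClass;

/* Base class init: runs first and installs the conservative default. */
static void base_cpu_class_init(MockCPUClass *cc)
{
    cc->bql_interrupt = true;
}

/* Arch-specific init: QOM runs this after the parent's class_init, so it
 * can safely override the inherited default without a wrapper function. */
static void arch_cpu_class_init(MockCPUClass *cc)
{
    base_cpu_class_init(cc);    /* stands in for QOM's parent-first init */
    cc->bql_interrupt = false;  /* this arch handles its own locking */
}
```

With this ordering, the direct assignment in the arch class_init replaces the proposed cpu_class_disable_bql_interrupt() wrapper.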

-- 
Eduardo




Thread overview: 13+ messages
2020-07-31 12:51 [PATCH 0/2] accel/tcg: remove implied BQL from cpu_handle_interrupt/exception path Robert Foley
2020-07-31 12:51 ` [PATCH 1/2] hw/core: Add bql_interrupt flag to CPUClass Robert Foley
2020-07-31 17:43   ` Eduardo Habkost
2020-07-31 19:14     ` Robert Foley
2020-07-31 19:24       ` Eduardo Habkost
2020-08-02 16:05         ` Alex Bennée
2020-08-04 20:36           ` Eduardo Habkost
2020-07-31 12:51 ` [PATCH 2/2] accel/tcg: interrupt/exception handling uses bql_interrupt flag Robert Foley
2020-07-31 18:02   ` Paolo Bonzini
2020-07-31 20:09     ` Robert Foley
2020-07-31 20:21       ` Paolo Bonzini
2020-08-02 16:09         ` Alex Bennée
2020-08-03  7:11           ` Paolo Bonzini
