From: "Alex Bennée" <alex.bennee@linaro.org>
To: Sergey Fedorov <serge.fdrv@gmail.com>
Cc: Sergey Fedorov <sergey.fedorov@linaro.org>,
	qemu-devel@nongnu.org, mttcg@listserver.greensocs.com,
	fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
	cota@braap.org, bobby.prani@gmail.com, rth@twiddle.net,
	patches@linaro.org, mark.burton@greensocs.com,
	pbonzini@redhat.com, jan.kiszka@siemens.com,
	peter.maydell@linaro.org, claudio.fontana@huawei.com,
	Peter Crosthwaite <crosthwaite.peter@gmail.com>
Subject: Re: [Qemu-devel] [PATCH v3 04/11] tcg: Prepare safe access to tb_flushed out of tb_lock
Date: Thu, 14 Jul 2016 14:12:06 +0100
Message-ID: <87inw84b4p.fsf@linaro.org>
In-Reply-To: <57878BCA.5010801@gmail.com>


Sergey Fedorov <serge.fdrv@gmail.com> writes:

> On 14/07/16 15:45, Alex Bennée wrote:
>> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
>>
>>> From: Sergey Fedorov <serge.fdrv@gmail.com>
>>>
>>> Ensure atomicity of CPU's 'tb_flushed' access for future translation
>>> block lookup out of 'tb_lock'.
>>>
>>> This field can only be touched from another thread by tb_flush() in user
>>> mode emulation. So the only accesses that need to be atomic are:
>>>  * a single write in tb_flush();
>>>  * reads/writes out of 'tb_lock'.
>> It might be worth mentioning the barrier here.
>
> Do you mean atomic_set() vs. atomic_mb_set()?

Yes.
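
Something along those lines in the commit message would help future
readers. Roughly the ordering involved, as I understand the
qemu/atomic.h helpers (a sketch only, not text to copy verbatim):

    /* Writer side, tb_flush(): atomic_mb_set() is a store with full
     * barrier semantics, so the tb_jmp_cache clears cannot be observed
     * after tb_flushed == true becomes visible to another thread.
     */
    atomic_set(&cpu->tb_jmp_cache[i], NULL);   /* relaxed store */
    atomic_mb_set(&cpu->tb_flushed, true);     /* store + memory barrier */

    /* Reader side, tb_find_fast(): a plain access remains sufficient in
     * this patch because the flag is only consumed while holding tb_lock.
     */
    if (cpu->tb_flushed) {
        cpu->tb_flushed = false;
    } else {
        tb_add_jump(last_tb, tb_exit, tb);
    }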

>
>>
>>> In the future, before enabling MTTCG in system mode, tb_flush() must be
>>> made safe and this field will become unnecessary.
>>>
>>> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
>>> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
>>> ---
>>>  cpu-exec.c      | 16 +++++++---------
>>>  translate-all.c |  4 ++--
>>>  2 files changed, 9 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>> index d6178eab71d4..c973e3b85922 100644
>>> --- a/cpu-exec.c
>>> +++ b/cpu-exec.c
>>> @@ -338,13 +338,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>                   tb->flags != flags)) {
>>>          tb = tb_find_slow(cpu, pc, cs_base, flags);
>>>      }
>>> -    if (cpu->tb_flushed) {
>>> -        /* Ensure that no TB jump will be modified as the
>>> -         * translation buffer has been flushed.
>>> -         */
>>> -        *last_tb = NULL;
>>> -        cpu->tb_flushed = false;
>>> -    }
>>>  #ifndef CONFIG_USER_ONLY
>>>      /* We don't take care of direct jumps when address mapping changes in
>>>       * system emulation. So it's not safe to make a direct jump to a TB
>>> @@ -356,7 +349,12 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>  #endif
>>>      /* See if we can patch the calling TB. */
>>>      if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>> -        tb_add_jump(last_tb, tb_exit, tb);
>>> +        /* Check if translation buffer has been flushed */
>>> +        if (cpu->tb_flushed) {
>>> +            cpu->tb_flushed = false;
>>> +        } else {
>>> +            tb_add_jump(last_tb, tb_exit, tb);
>>> +        }
>>>      }
>>>      tb_unlock();
>>>      return tb;
>>> @@ -618,7 +616,7 @@ int cpu_exec(CPUState *cpu)
>>>              }
>>>
>>>              last_tb = NULL; /* forget the last executed TB after exception */
>>> -            cpu->tb_flushed = false; /* reset before first TB lookup */
>>> +            atomic_mb_set(&cpu->tb_flushed, false); /* reset before first TB lookup */
>>>              for(;;) {
>>>                  cpu_handle_interrupt(cpu, &last_tb);
>>>                  tb = tb_find_fast(cpu, last_tb, tb_exit);
>>> diff --git a/translate-all.c b/translate-all.c
>>> index fdf520a86d68..788fed1e0765 100644
>>> --- a/translate-all.c
>>> +++ b/translate-all.c
>>> @@ -845,7 +845,6 @@ void tb_flush(CPUState *cpu)
>>>          > tcg_ctx.code_gen_buffer_size) {
>>>          cpu_abort(cpu, "Internal error: code buffer overflow\n");
>>>      }
>>> -    tcg_ctx.tb_ctx.nb_tbs = 0;
>>>
>>>      CPU_FOREACH(cpu) {
>>>          int i;
>>> @@ -853,9 +852,10 @@ void tb_flush(CPUState *cpu)
>>>          for (i = 0; i < TB_JMP_CACHE_SIZE; ++i) {
>>>              atomic_set(&cpu->tb_jmp_cache[i], NULL);
>>>          }
>>> -        cpu->tb_flushed = true;
>>> +        atomic_mb_set(&cpu->tb_flushed, true);
>>>      }
>>>
>>> +    tcg_ctx.tb_ctx.nb_tbs = 0;
>>>      qht_reset_size(&tcg_ctx.tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
>> I can see the sense of moving the setting of nb_tbs but is it strictly
>> required as part of this patch?
>
> Yes, otherwise tb_alloc() may start allocating TBs from the beginning of
> the translation buffer before 'tb_flushed' is updated.

Ahh yes I see. Thanks
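
For the record, the window I had missed if the nb_tbs reset stayed ahead
of the flag update (user-mode case; sketch only, comments are mine):

    tcg_ctx.tb_ctx.nb_tbs = 0;                 /* buffer reusable again... */
    CPU_FOREACH(cpu) {
        int i;

        for (i = 0; i < TB_JMP_CACHE_SIZE; ++i) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
        atomic_mb_set(&cpu->tb_flushed, true); /* ...but the flag lags behind */
    }
    /* In that window another thread can tb_alloc() from the recycled
     * buffer and tb_add_jump() into a stale last_tb before it ever sees
     * tb_flushed == true.  Setting the flag for every CPU first and only
     * then resetting nb_tbs closes the window.
     */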

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>


>
> Kind regards,
> Sergey
>
>>
>>>      page_flush_tb();
>> Otherwise:
>>
>> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>>
>> --
>> Alex Bennée


--
Alex Bennée
