From: Sergey Fedorov
Date: Fri, 8 Jul 2016 15:32:41 +0300
Message-ID: <577F9D69.3050600@gmail.com>
In-Reply-To: <1711483720.5250530.1467975750519.JavaMail.zimbra@redhat.com>
References: <1467735496-16256-7-git-send-email-alex.bennee@linaro.org>
 <1467909880-18834-1-git-send-email-sergey.fedorov@linaro.org>
 <1467909880-18834-4-git-send-email-sergey.fedorov@linaro.org>
 <71295249.5198305.1467967205882.JavaMail.zimbra@redhat.com>
 <577F7F7D.4010207@gmail.com>
 <1711483720.5250530.1467975750519.JavaMail.zimbra@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 3/3] tcg: Avoid bouncing tb_lock between
 tb_gen_code() and tb_add_jump()
To: Paolo Bonzini
Cc: Sergey Fedorov, qemu-devel@nongnu.org, Alex Bennée, patches@linaro.org,
 mttcg@greensocs.com, fred konrad, a rigo, cota@braap.org, bobby prani,
 rth@twiddle.net, mark burton, jan kiszka, peter maydell, claudio fontana,
 Peter Crosthwaite

On 08/07/16 14:02, Paolo Bonzini wrote:
>> On 08/07/16 11:40, Paolo Bonzini wrote:
>>> Even better: add a "bool *tb_locked" argument to tb_find_slow, and
>>> don't move the mmap_lock release. Then tb_find_fast knows directly
>>> whether tb_lock is taken, and you don't need any of tb_lock_reset
>>> or mmap_lock_reset.
>> I think we can do even better. One option is using a separate tiny lock
>> to protect direct jump set/reset instead of tb_lock.
> If you have to use a separate tiny lock, you don't gain anything compared
> to the two critical sections, do you?

If we have a separate lock for direct jump set/reset, then we can do the
fast TB lookup and the direct jump patching without taking tb_lock at
all. How much that would reduce lock contention largely depends on the
workload.
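
Just to illustrate where such a tiny lock would sit, here is a toy
sketch. It is not actual QEMU code: the lock, the struct and the helper
names are invented stand-ins, and the real tb_find_fast()/tb_add_jump()
details are elided.

#include <pthread.h>
#include <stddef.h>

struct tb;                            /* stand-in for TranslationBlock */

/* hypothetical dedicated lock protecting only direct jump set/reset */
static pthread_mutex_t tb_jmp_lock = PTHREAD_MUTEX_INITIALIZER;

/* stand-in for tb_add_jump(): patch exit slot n of last_tb to jump to
 * tb and record the link so it can be reset when tb is invalidated */
static void add_direct_jump(struct tb *last_tb, int n, struct tb *tb)
{
    (void)last_tb; (void)n; (void)tb;   /* patching itself elided */
}

/* hot-path sketch: the TB lookup stays lock-free, only the chaining
 * takes the tiny lock; tb_lock is not touched at all */
static void chain_tbs(struct tb *last_tb, int n, struct tb *tb)
{
    if (last_tb != NULL && tb != NULL) {
        pthread_mutex_lock(&tb_jmp_lock);
        add_direct_jump(last_tb, n, tb);
        pthread_mutex_unlock(&tb_jmp_lock);
    }
}

The reset side (the unlinking done from tb_phys_invalidate()) would take
the same lock, so jump set and reset could not race, while tb_lock would
stay only around code generation.
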
>
> In any case, this seems to be more complex than necessary. The "bool *"
> convention is pretty common in Linux, for example; it works well.

It could work, no doubt.

>
> The one below is even more complicated. I'm all for simple lock-free
> stacks (e.g. QSLIST_INSERT_HEAD_ATOMIC and QSLIST_MOVE_ATOMIC), but
> lock-free lists are too much, especially with the complicated first/next
> mechanism of TCG's chained block lists.

The direct jump handling code is pretty isolated and self-contained. It
would require backing out of tb_remove_from_jmp_list() and sprinkling a
couple of atomic_rcu_read()/atomic_rcu_set() calls, with some comments, I
think. Maybe it would be easier to justify by looking at actual patches.

Thanks,
Sergey

>
> Paolo
>
>> Another option which I've had in my mind for some time is to make direct
>> jump set/reset thread-safe. We already have thread-safe TB patching. The
>> only question is the right order of operations and handling
>> jmp_list_next/jmp_list_first safely. I think that could be done by
>> removing tb_remove_from_jmp_list() and manipulating
>> jmp_list_next/jmp_list_first in an RCU-like way. What do you think?
>>
>> Kind regards,
>> Sergey
>>
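
P.S. To make the RCU-like manipulation above a bit more concrete, here
is a toy sketch of the publish/walk pattern I mean. It is not actual
QEMU code: the node structure and helper names are invented (the real
lists are intrusive, encoded in jmp_list_next/jmp_list_first), and the
C11 atomics stand in for QEMU's atomic_rcu_set()/atomic_rcu_read().

#include <stdatomic.h>
#include <stdio.h>

struct tb;                                /* stand-in for TranslationBlock */

struct jmp_node {                         /* one incoming direct jump */
    struct tb *from;                      /* TB whose exit was patched */
    int n;                                /* which of its two jump slots */
    _Atomic(struct jmp_node *) next;
};

struct tb {
    _Atomic(struct jmp_node *) jmp_first; /* head of the incoming-jump list */
};

/* stand-in for tb_reset_jump(): unpatch jump slot n of 'from' */
static void reset_jump(struct tb *from, int n)
{
    printf("unpatch jump %d of TB %p\n", n, (void *)from);
}

/* set side: fully initialize the entry, then publish it with a release
 * store (atomic_rcu_set() in QEMU terms); adds are assumed to stay
 * serialized by whatever lock protects jump patching */
static void jmp_list_publish(struct tb *to, struct jmp_node *node)
{
    atomic_store_explicit(&node->next,
                          atomic_load_explicit(&to->jmp_first,
                                               memory_order_relaxed),
                          memory_order_relaxed);
    atomic_store_explicit(&to->jmp_first, node, memory_order_release);
}

/* reset side: walk the list with acquire loads (atomic_rcu_read()) and
 * unpatch every jump; the list is dropped as a whole instead of
 * unlinking entries one by one with tb_remove_from_jmp_list() */
static void jmp_list_reset_all(struct tb *to)
{
    struct jmp_node *node =
        atomic_load_explicit(&to->jmp_first, memory_order_acquire);

    while (node != NULL) {
        struct jmp_node *next =
            atomic_load_explicit(&node->next, memory_order_acquire);
        reset_jump(node->from, node->n);
        node = next;
    }
    atomic_store_explicit(&to->jmp_first, NULL, memory_order_relaxed);
}

int main(void)
{
    struct tb target = { .jmp_first = NULL };
    struct tb a, b;
    struct jmp_node na = { .from = &a, .n = 0, .next = NULL };
    struct jmp_node nb = { .from = &b, .n = 1, .next = NULL };

    jmp_list_publish(&target, &na);
    jmp_list_publish(&target, &nb);
    jmp_list_reset_all(&target);
    return 0;
}

The point is that the set side only ever publishes fully initialized
entries and the reset side only ever reads them, so no per-entry unlink
step is needed; actually reclaiming the entries would still have to wait
for a grace period (or for tb_flush), of course.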