From: alvise rigo
Date: Fri, 10 Jun 2016 18:04:42 +0200
Subject: Re: [Qemu-devel] [RFC 02/10] softmmu_llsc_template.h: Move to multi-threading
To: Sergey Fedorov
Cc: MTTCG Devel, Alex Bennée, QEMU Developers, Jani Kokkonen,
    Claudio Fontana, VirtualOpenSystems Technical Team, KONRAD Frédéric,
    Paolo Bonzini, Richard Henderson, "Emilio G. Cota", Peter Maydell

This would require the whole history to be filled again, which I find
very unlikely. In any case, this has to be documented.

Thank you,
alvise

On Fri, Jun 10, 2016 at 6:00 PM, Sergey Fedorov wrote:
> On 10/06/16 18:53, alvise rigo wrote:
>> On Fri, Jun 10, 2016 at 5:21 PM, Sergey Fedorov wrote:
>>> On 26/05/16 19:35, Alvise Rigo wrote:
>>>> Using tcg_exclusive_{lock,unlock}(), make the emulation of
>>>> LoadLink/StoreConditional thread safe.
>>>>
>>>> During an LL access, this lock protects the load access itself, the
>>>> update of the exclusive history and the update of the VCPU's protected
>>>> range. In an SC access, the lock protects the store access itself, the
>>>> possible reset of other VCPUs' protected ranges and the reset of the
>>>> exclusive context of the calling VCPU.
>>>>
>>>> The lock is also taken when a normal store happens to access an
>>>> exclusive page, to reset other VCPUs' protected ranges in case of
>>>> collision.
>>> I think the key problem here is that the load in the LL helper can race
>>> with a concurrent regular fast-path store. It's probably easier to
>>> annotate the source here:
>>>
>>>  1    WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
>>>  2                                 TCGMemOpIdx oi, uintptr_t retaddr)
>>>  3    {
>>>  4        WORD_TYPE ret;
>>>  5        int index;
>>>  6        CPUState *this_cpu = ENV_GET_CPU(env);
>>>  7        CPUClass *cc = CPU_GET_CLASS(this_cpu);
>>>  8        hwaddr hw_addr;
>>>  9        unsigned mmu_idx = get_mmuidx(oi);
>>>
>>> 10        index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
>>>
>>> 11        tcg_exclusive_lock();
>>>
>>> 12        /* Use the proper load helper from cpu_ldst.h */
>>> 13        ret = helper_ld(env, addr, oi, retaddr);
>>>
>>> 14        /* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
>>> 15         * plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
>>> 16        hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
>>> 17        if (likely(!(env->tlb_table[mmu_idx][index].addr_read & TLB_MMIO))) {
>>> 18            /* If all the vCPUs have the EXCL bit set for this page
>>> 19             * there is no need to request any flush. */
>>> 20            if (cpu_physical_memory_set_excl(hw_addr)) {
>>> 21                CPUState *cpu;
>>>
>>> 22                excl_history_put_addr(hw_addr);
>>> 23                CPU_FOREACH(cpu) {
>>> 24                    if (this_cpu != cpu) {
>>> 25                        tlb_flush_other(this_cpu, cpu, 1);
>>> 26                    }
>>> 27                }
>>> 28            }
>>> 29            /* For this vCPU, just update the TLB entry, no need to flush. */
>>> 30            env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
>>> 31        } else {
>>> 32            /* Set a pending exclusive access in the MemoryRegion */
>>> 33            MemoryRegion *mr = iotlb_to_region(this_cpu,
>>> 34                                               env->iotlb[mmu_idx][index].addr,
>>> 35                                               env->iotlb[mmu_idx][index].attrs);
>>> 36            mr->pending_excl_access = true;
>>> 37        }
>>>
>>> 38        cc->cpu_set_excl_protected_range(this_cpu, hw_addr, DATA_SIZE);
>>>
>>> 39        tcg_exclusive_unlock();
>>>
>>> 40        /* From now on we are in LL/SC context */
>>> 41        this_cpu->ll_sc_context = true;
>>>
>>> 42        return ret;
>>> 43    }
>>>
>>> The exclusive lock at line 11 doesn't help if a concurrent fast-path
>>> store at this address occurs after we have finished the load at line 13
>>> but before the TLB is flushed as a result of line 25. If we reorder the
>>> load to happen after the TLB flush request, we still must be sure that
>>> the flush is complete before we can do the load safely.
>> You are right, the risk actually exists. One solution to the problem
>> could be to ignore the data acquired by the load and redo the LL after
>> the flushes have been completed (basically, the disas_ctx->pc points to
>> the LL instruction). This time the LL will happen without flush
>> requests, and the access will actually be protected by the lock.
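>> At run time (as an alternative to going through disas_ctx->pc) the
>> retry could look roughly like the following. This is only an untested
>> sketch: the 'flush_pending' flag is invented, and it assumes
>> cpu_loop_exit_restore() can be used from the helper to unwind and
>> re-execute the guest LL instruction:
>>
>>     if (cpu_physical_memory_set_excl(hw_addr)) {
>>         CPUState *cpu;
>>         bool flush_pending = false;
>>
>>         excl_history_put_addr(hw_addr);
>>         CPU_FOREACH(cpu) {
>>             if (this_cpu != cpu) {
>>                 tlb_flush_other(this_cpu, cpu, 1);
>>                 flush_pending = true;
>>             }
>>         }
>>         if (flush_pending) {
>>             /* Discard 'ret' and redo the whole LL: on the retry no
>>              * flush is requested, so the load at line 13 runs
>>              * entirely under the exclusive lock. */
>>             tcg_exclusive_unlock();
>>             cpu_loop_exit_restore(this_cpu, retaddr);
>>         }
>>     }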
> Yes, provided that no other CPU evicts the entry with the same
> address from the exclusive history in the meantime...
>
> Kind regards,
> Sergey
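For reference, the exclusive history discussed above behaves like a
small fixed-size ring of recently LL'd page addresses. A minimal
sketch of that shape follows; apart from excl_history_put_addr(), the
names and the size are invented for illustration:

    /* A fixed-size, round-robin history of recently LL'd addresses.
     * A new address overwrites the oldest slot, so a live entry is
     * evicted only after EXCL_HISTORY_SIZE further LL accesses to
     * other pages -- i.e. only once the whole history has been
     * filled again, which is why the eviction is considered unlikely. */
    #define EXCL_HISTORY_SIZE 8

    static hwaddr excl_history[EXCL_HISTORY_SIZE];
    static unsigned excl_history_pos;

    static void excl_history_put_addr(hwaddr addr)
    {
        /* Overwrite the oldest entry; all updates happen under
         * tcg_exclusive_lock(), so no extra locking is needed here. */
        excl_history[excl_history_pos] = addr;
        excl_history_pos = (excl_history_pos + 1) % EXCL_HISTORY_SIZE;
    }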