Date: Fri, 12 Jun 2020 17:24:28 -0700 (PDT)
From: Stefano Stabellini
To: Bertrand Marquis
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Ian Jackson,
    George Dunlap, Roger Pau Monné, Jan Beulich, xen-devel, nd,
    Volodymyr Babchuk, Julien Grall
Subject: Re: [PATCH 1/2] xen/arm: Convert runstate address during hypcall
In-Reply-To: <48F66B8F-3AEF-4E54-A572-C246F5A7C117@arm.com>

On Fri, 12 Jun 2020, Bertrand Marquis wrote:
> > On 12 Jun 2020, at 02:09, Stefano Stabellini wrote:
> > 
> > On Thu, 11 Jun 2020, Julien Grall wrote:
> >> Hi Stefano,
> >> 
> >> On 11/06/2020 19:50, Stefano Stabellini wrote:
> >>> On Thu, 11 Jun 2020, Julien Grall wrote:
> >>>>>> +        return -EINVAL;
> >>>>>>      }
> >>>>>> 
> >>>>>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> >>>>>> +    v->arch.runstate_guest.page = page;
> >>>>>> +    v->arch.runstate_guest.offset = offset;
> >>>>>> +
> >>>>>> +    spin_unlock(&v->arch.runstate_guest.lock);
> >>>>>> +
> >>>>>> +    return 0;
> >>>>>> +}
> >>>>>> +
> >>>>>> +
> >>>>>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
> >>>>>> +static void update_runstate_area(struct vcpu *v)
> >>>>>> +{
> >>>>>> +    struct vcpu_runstate_info *guest_runstate;
> >>>>>> +    void *p;
> >>>>>> +
> >>>>>> +    spin_lock(&v->arch.runstate_guest.lock);
> >>>>>> 
> >>>>>> -    if ( guest_handle )
> >>>>>> +    if ( v->arch.runstate_guest.page )
> >>>>>>      {
> >>>>>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> >>>>>> +        p = __map_domain_page(v->arch.runstate_guest.page);
> >>>>>> +        guest_runstate = p + v->arch.runstate_guest.offset;
> >>>>>> +
> >>>>>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> >>>>>> +        {
> >>>>>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> >>>>>> +            guest_runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
> >>>>> 
> >>>>> I think that this write to guest_runstate should use write_atomic or
> >>>>> another atomic write operation.
> >>>> 
> >>>> I thought about suggesting the same, but guest_copy_* helpers may not
> >>>> do a single memory write to state_entry_time.
> >>>> What are you trying to prevent with the write_atomic()?
> >>> 
> >>> I am thinking that without using an atomic write, it would be (at least
> >>> theoretically) possible for a guest to see a partial write to
> >>> state_entry_time, which is not good.
> >> 
> >> It is already the case with the existing implementation, as Xen may write
> >> byte by byte. So are you suggesting the existing code is also buggy?
> > 
> > Writing byte by byte is a different case. That is OK. In that case, the
> > guest could see the state after 3 bytes written and it would be fine and
> > consistent. If this hadn't been the case, then yes, the existing code
> > would also be buggy.
> > 
> > So if we did the write with a memcpy, it would be fine, no need for
> > atomics:
> > 
> >   memcpy(&guest_runstate->state_entry_time,
> >          &v->runstate.state_entry_time,
> >          XXX);
> > 
> > The |= case is different: GCC could implement it in any way it likes,
> > including going through a zero-write to any of the bytes in the word, or
> > doing an addition then a subtraction. GCC doesn't make any guarantees.
> > If we want guarantees we need to use atomics.
> 
> Wouldn't that require all accesses to state_entry_time to also use atomic operations?
> In this case we could not propagate the changes to a guest without changing the interface itself.
> 
> As the copy time needs to be protected, the write barriers are there to make sure that during the copy the bit is set and that when we unset it, the copy is done.
> I added for this purpose a barrier after the memcpy to make sure that
> when/if we unset the bit the copy has already been done.

As you say, we have a flag to mark a transitioning period: the flag is
XEN_RUNSTATE_UPDATE. So I think it is OK if we don't use atomics during
the transitioning period. But we need to make sure to use atomics for
the update of the XEN_RUNSTATE_UPDATE flag itself.

Does that make sense? Or maybe I misunderstood some of the things you
wrote?
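
To make the suggestion concrete, here is a rough sketch of what I have in
mind, based only on the hunk quoted above and untested. The barrier
placement, and whether write_atomic() can be relied on for a 64-bit store
on arm32, are assumptions for illustration, not a finished implementation:

/* Rough sketch: only the guest-visible XEN_RUNSTATE_UPDATE bit is
 * written with a single atomic store; the bulk of the runstate is
 * copied with a plain memcpy inside the window delimited by the flag
 * and the barriers. */
static void update_runstate_area(struct vcpu *v)
{
    struct vcpu_runstate_info *guest_runstate;
    void *p;

    spin_lock(&v->arch.runstate_guest.lock);

    if ( v->arch.runstate_guest.page )
    {
        p = __map_domain_page(v->arch.runstate_guest.page);
        guest_runstate = p + v->arch.runstate_guest.offset;

        if ( VM_ASSIST(v->domain, runstate_update_flag) )
        {
            /* Xen's private copy: a plain |= is fine here. */
            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
            /* Guest-visible copy: single atomic store, no torn write. */
            write_atomic(&guest_runstate->state_entry_time,
                         v->runstate.state_entry_time);
            /* The flag must be visible before the copy starts. */
            smp_wmb();
        }

        memcpy(guest_runstate, &v->runstate, sizeof(v->runstate));

        if ( VM_ASSIST(v->domain, runstate_update_flag) )
        {
            /* The copy must be visible before the flag is cleared. */
            smp_wmb();
            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
            write_atomic(&guest_runstate->state_entry_time,
                         v->runstate.state_entry_time);
        }

        unmap_domain_page(p);
    }

    spin_unlock(&v->arch.runstate_guest.lock);
}

The memcpy in the middle doesn't need any atomicity guarantee: a guest
that honours the runstate_update_flag assist is expected to wait until
XEN_RUNSTATE_UPDATE is clear before trusting the contents.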