From: Andrew Cooper
To: Jan Beulich, xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
Subject: Re: [PATCH 2/2] x86/PoD: move increment of entry count
Date: Wed, 1 Dec 2021 11:27:23 +0000
Message-ID: <5585cbf5-6248-ca6f-8b9e-764dbb08be43@srcf.net>

On 01/12/2021 11:02, Jan Beulich wrote:
> When not holding the PoD lock across the entire region covering P2M
> update and stats update, the entry count should indicate too large a
> value in preference to a too small one, to avoid functions bailing early
> when they find the count is zero. Hence increments should happen ahead
> of P2M updates, while decrements should happen only after. Deal with the
> one place where this hasn't been the case yet.
>
> Signed-off-by: Jan Beulich
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1345,19 +1345,15 @@ mark_populate_on_demand(struct domain *d
>          }
>      }
>
> +    pod_lock(p2m);
> +    p2m->pod.entry_count += (1UL << order) - pod_count;
> +    pod_unlock(p2m);
> +
>      /* Now, actually do the two-way mapping */
>      rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
>                         p2m_populate_on_demand, p2m->default_access);
>      if ( rc == 0 )
> -    {
> -        pod_lock(p2m);
> -        p2m->pod.entry_count += 1UL << order;
> -        p2m->pod.entry_count -= pod_count;
> -        BUG_ON(p2m->pod.entry_count < 0);
> -        pod_unlock(p2m);
> -
>          ioreq_request_mapcache_invalidate(d);
> -    }
>      else if ( order )
>      {
>          /*
> @@ -1369,6 +1365,13 @@ mark_populate_on_demand(struct domain *d
>                 d, gfn_l, order, rc);
>          domain_crash(d);
>      }
> +    else if ( !pod_count )
> +    {
> +        pod_lock(p2m);
> +        BUG_ON(!p2m->pod.entry_count);
> +        --p2m->pod.entry_count;
> +        pod_unlock(p2m);
> +    }
>
>  out:
>      gfn_unlock(p2m, gfn, order);

This email appears to contain the same patch twice, presumably split at
this point. Which one should be reviewed?

~Andrew

> When not holding the PoD lock across the entire region covering P2M
> update and stats update, the entry count should indicate too large a
> value in preference to a too small one, to avoid functions bailing early
> when they find the count is zero. Hence increments should happen ahead
> of P2M updates, while decrements should happen only after. Deal with the
> one place where this hasn't been the case yet.
>
> Signed-off-by: Jan Beulich
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1345,19 +1345,15 @@ mark_populate_on_demand(struct domain *d
>          }
>      }
>
> +    pod_lock(p2m);
> +    p2m->pod.entry_count += (1UL << order) - pod_count;
> +    pod_unlock(p2m);
> +
>      /* Now, actually do the two-way mapping */
>      rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
>                         p2m_populate_on_demand, p2m->default_access);
>      if ( rc == 0 )
> -    {
> -        pod_lock(p2m);
> -        p2m->pod.entry_count += 1UL << order;
> -        p2m->pod.entry_count -= pod_count;
> -        BUG_ON(p2m->pod.entry_count < 0);
> -        pod_unlock(p2m);
> -
>          ioreq_request_mapcache_invalidate(d);
> -    }
>      else if ( order )
>      {
>          /*
> @@ -1369,6 +1365,13 @@ mark_populate_on_demand(struct domain *d
>                 d, gfn_l, order, rc);
>          domain_crash(d);
>      }
> +    else if ( !pod_count )
> +    {
> +        pod_lock(p2m);
> +        BUG_ON(!p2m->pod.entry_count);
> +        --p2m->pod.entry_count;
> +        pod_unlock(p2m);
> +    }
>
>  out:
>      gfn_unlock(p2m, gfn, order);
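
[Editorial sketch, for context only: the following is a minimal, self-contained
illustration of the increment-before / decrement-after ordering the commit
message describes. The names used here (entry_count, set_entry,
mark_pod_entry) are hypothetical stand-ins, not the actual Xen p2m code, and
the real patch handles a failed superpage (order > 0) update by crashing the
domain rather than unwinding the count.]

/*
 * Sketch only: illustrates the ordering rule, not the Xen implementation.
 * Raising the counter before the mapping update means a racing reader can
 * only ever see a count that is too large, never too small; the counter is
 * lowered again only once the update is known to have failed.
 */
#include <stdio.h>

static long entry_count;        /* stands in for p2m->pod.entry_count */

/* Hypothetical mapping update: returns 0 on success, -1 on failure. */
static int set_entry(int fail)
{
    return fail ? -1 : 0;
}

static int mark_pod_entry(int fail)
{
    int rc;

    entry_count++;              /* increment ahead of the update */

    rc = set_entry(fail);
    if ( rc != 0 )
        entry_count--;          /* decrement only after a failure */

    return rc;
}

int main(void)
{
    mark_pod_entry(0);          /* succeeds: count stays raised */
    mark_pod_entry(1);          /* fails: count is rolled back */
    printf("entry_count = %ld\n", entry_count);   /* prints 1 */
    return 0;
}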