From: George Dunlap
Subject: Re: [PATCH v3 05/13] x86/altp2m: basic data structures and support routines.
Date: Tue, 7 Jul 2015 16:04:19 +0100
Message-ID: <559BEA73.3050906@eu.citrix.com>
References: <1435774177-6345-1-git-send-email-edmund.h.white@intel.com> <1435774177-6345-6-git-send-email-edmund.h.white@intel.com>
In-Reply-To: <1435774177-6345-6-git-send-email-edmund.h.white@intel.com>
To: Ed White, xen-devel@lists.xen.org
Cc: Ravi Sahita, Wei Liu, Ian Jackson, Tim Deegan, Jan Beulich, Andrew Cooper, tlengyel@novetta.com, Daniel De Graaf
List-Id: xen-devel@lists.xenproject.org

On 07/01/2015 07:09 PM, Ed White wrote:
> diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
> index b4f035e..301ca59 100644
> --- a/xen/arch/x86/mm/mm-locks.h
> +++ b/xen/arch/x86/mm/mm-locks.h
> @@ -217,7 +217,7 @@ declare_mm_lock(nestedp2m)
>  #define nestedp2m_lock(d)   mm_lock(nestedp2m, &(d)->arch.nested_p2m_lock)
>  #define nestedp2m_unlock(d) mm_unlock(&(d)->arch.nested_p2m_lock)
>  
> -/* P2M lock (per-p2m-table)
> +/* P2M lock (per-non-alt-p2m-table)
>   *
>   * This protects all queries and updates to the p2m table.
>   * Queries may be made under the read lock but all modifications
> @@ -225,10 +225,44 @@ declare_mm_lock(nestedp2m)
>   *
>   * The write lock is recursive as it is common for a code path to look
>   * up a gfn and later mutate it.
> + *
> + * Note that this lock shares its implementation with the altp2m
> + * lock (not the altp2m list lock), so the implementation
> + * is found there.
>   */
>  
>  declare_mm_rwlock(p2m);
> -#define p2m_lock(p)           mm_write_lock(p2m, &(p)->lock);
> +
> +/* Alternate P2M list lock (per-domain)
> + *
> + * A per-domain lock that protects the list of alternate p2m's.
> + * Any operation that walks the list needs to acquire this lock.
> + * Additionally, before destroying an alternate p2m all VCPU's
> + * in the target domain must be paused.
> + */
> +
> +declare_mm_lock(altp2mlist)
> +#define altp2m_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m_lock)
> +#define altp2m_unlock(d) mm_unlock(&(d)->arch.altp2m_lock)
> +
> +/* P2M lock (per-altp2m-table)
> + *
> + * This protects all queries and updates to the p2m table.
> + * Queries may be made under the read lock but all modifications
> + * need the main (write) lock.
> + *
> + * The write lock is recursive as it is common for a code path to look
> + * up a gfn and later mutate it.
> + */
> +
> +declare_mm_rwlock(altp2m);
> +#define p2m_lock(p) \
> +{ \
> +    if ( p2m_is_altp2m(p) ) \
> +        mm_write_lock(altp2m, &(p)->lock); \
> +    else \
> +        mm_write_lock(p2m, &(p)->lock); \
> +}
>  #define p2m_unlock(p)         mm_write_unlock(&(p)->lock);
>  #define gfn_lock(p,g,o)       p2m_lock(p)
>  #define gfn_unlock(p,g,o)     p2m_unlock(p)

I've just taken on the role of mm maintainer, and so this late in a
series, having Tim's approval and also having Andy's reviewed-by, I'd
normally just skim through and Ack it as a matter of course.  But I just
wouldn't feel right giving this my maintainer's ack without
understanding the locking a bit better.  So please bear with me here.

I see in the cover letter that you "sandwiched" the altp2mlist lock
between p2m and altp2m at Tim's suggestion.  But I can't find the
discussion where that was suggested (it doesn't seem to be in response
to v1 patch 5), and it's not immediately obvious to me, either from the
code or your comments, what that's for.
Can you point me to the discussion, and/or give me a better description
as to why it's important to be able to grab the p2m lock before the
altp2mlist lock, but the altp2m lock afterwards?

Thanks,
 -George