From: Jonathan Corbet <corbet@lwn.net>
To: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, "Huang, Ying" <ying.huang@intel.com>,
	dave.hansen@intel.com, ak@linux.intel.com, aaron.lu@intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Hugh Dickins <hughd@google.com>, Shaohua Li <shli@kernel.org>,
	Minchan Kim <minchan@kernel.org>, Rik van Riel <riel@redhat.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>,
	Hillf Danton <hillf.zj@alibaba-inc.com>
Subject: Re: [PATCH v2 2/8] mm/swap: Add cluster lock
Date: Mon, 24 Oct 2016 10:31:33 -0600
Message-ID: <20161024103133.7c1a8f83@lwn.net> (raw)
In-Reply-To: <f399f0381db2e6d6bba804d139f5f41725137337.1477004978.git.tim.c.chen@linux.intel.com>

On Thu, 20 Oct 2016 16:31:41 -0700
Tim Chen <tim.c.chen@linux.intel.com> wrote:

> From: "Huang, Ying" <ying.huang@intel.com>
>
> This patch reduces contention on swap_info_struct->lock by using a
> finer-grained lock in swap_cluster_info for some swap operations.
> swap_info_struct->lock is heavily contended when multiple processes
> reclaim pages simultaneously: there is only one such lock per swap
> device, a common configuration has only one or a few swap devices in
> the system, and the lock protects almost all swap-related operations.

So I'm looking at this a bit.  Overall it seems like a good thing to do
(from my limited understanding of this area) but I have a probably
silly question...
>  struct swap_cluster_info {
> -	unsigned int data:24;
> -	unsigned int flags:8;
> +	unsigned long data;
>  };
> -#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
> -#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
> +#define CLUSTER_COUNT_SHIFT 8
> +#define CLUSTER_FLAG_MASK ((1UL << CLUSTER_COUNT_SHIFT) - 1)
> +#define CLUSTER_COUNT_MASK (~CLUSTER_FLAG_MASK)
> +#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
> +#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
> +/* cluster lock, protect cluster_info contents and sis->swap_map */
> +#define CLUSTER_FLAG_LOCK_BIT 2
> +#define CLUSTER_FLAG_LOCK (1 << CLUSTER_FLAG_LOCK_BIT)

Why the roll-your-own locking and data structures here?  To my naive
understanding, it seems like you could do something like:

	struct swap_cluster_info {
		spinlock_t lock;
		atomic_t count;
		unsigned int flags;
	};

Then you could use proper spinlock operations which, among other
things, would make the realtime folks happier.  That might well help
with the cache-line sharing issues as well.  Some of the count
manipulations could perhaps be done without the lock entirely;
similarly, atomic bitops might save you the locking for some of the
flag tweaks - though I'd have to look more closely to be really sure
of that.

The cost, of course, is the growth of this structure, but you've
already noted that the overhead isn't all that high; seems like it
could be worth it.  I assume that I'm missing something obvious here?

Thanks,

jon
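[For readers following along: the packed-word scheme the patch introduces - count
in the high bits, flags and a lock bit in the low byte of a single unsigned long -
can be sketched in userspace C11 atomics roughly as below. This is an illustration
only, not the patch's code: the kernel uses its own bit operations and also
disables preemption, and the helper names here are made up for the sketch.]

```c
#include <stdatomic.h>

#define CLUSTER_COUNT_SHIFT   8
#define CLUSTER_FLAG_MASK     ((1UL << CLUSTER_COUNT_SHIFT) - 1)
#define CLUSTER_FLAG_FREE     1UL
#define CLUSTER_FLAG_LOCK_BIT 2
#define CLUSTER_FLAG_LOCK     (1UL << CLUSTER_FLAG_LOCK_BIT)

struct swap_cluster_info {
	_Atomic unsigned long data;	/* count in high bits, flags in low 8 */
};

static void cluster_lock(struct swap_cluster_info *ci)
{
	/* Spin until we are the thread that flipped the lock bit 0 -> 1. */
	while (atomic_fetch_or(&ci->data, CLUSTER_FLAG_LOCK) & CLUSTER_FLAG_LOCK)
		;
}

static void cluster_unlock(struct swap_cluster_info *ci)
{
	atomic_fetch_and(&ci->data, ~CLUSTER_FLAG_LOCK);
}

static unsigned long cluster_count(struct swap_cluster_info *ci)
{
	return atomic_load(&ci->data) >> CLUSTER_COUNT_SHIFT;
}

static void cluster_set_count(struct swap_cluster_info *ci, unsigned long count)
{
	/* Caller must hold the cluster lock; preserve the flag bits. */
	unsigned long old = atomic_load(&ci->data);
	atomic_store(&ci->data,
		     (old & CLUSTER_FLAG_MASK) | (count << CLUSTER_COUNT_SHIFT));
}
```

[The point of the sketch is that lock, count, and flags all share one word, which
is what keeps the structure small but also what forces the hand-rolled locking
that the question above is about.]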