From: huang ying <huang.ying.caritas@gmail.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	dave.hansen@intel.com, Andi Kleen <ak@linux.intel.com>,
	Aaron Lu <aaron.lu@intel.com>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
	Hugh Dickins <hughd@google.com>, Shaohua Li <shli@kernel.org>,
	Minchan Kim <minchan@kernel.org>, Rik van Riel <riel@redhat.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Hillf Danton <hillf.zj@alibaba-inc.com>
Subject: Re: [PATCH v2 2/8] mm/swap: Add cluster lock
Date: Wed, 28 Dec 2016 11:34:01 +0800	[thread overview]
Message-ID: <CAC=cRTNTpnqOp5-G+c4dEPdADeL2m=zorvFBoY8sYaWKCwGOgg@mail.gmail.com> (raw)
In-Reply-To: <87k2cxkwss.fsf@yhuang-dev.intel.com>

Hi, Jonathan,

On Tue, Oct 25, 2016 at 10:05 AM, Huang, Ying <ying.huang@intel.com> wrote:
> Hi, Jonathan,
>
> Thanks for the review.
>
> Jonathan Corbet <corbet@lwn.net> writes:
>
>> On Thu, 20 Oct 2016 16:31:41 -0700
>> Tim Chen <tim.c.chen@linux.intel.com> wrote:
>>
>>> From: "Huang, Ying" <ying.huang@intel.com>
>>>
>>> This patch reduces contention on swap_info_struct->lock by using a
>>> more fine-grained lock in swap_cluster_info for some swap
>>> operations.  swap_info_struct->lock is heavily contended when
>>> multiple processes reclaim pages simultaneously, because there is
>>> only one lock per swap device, a typical system has only one or a
>>> few swap devices, and the lock protects almost all swap-related
>>> operations.
>>
>> So I'm looking at this a bit.  Overall it seems like a good thing to do
>> (from my limited understanding of this area) but I have a probably silly
>> question...
>>
>>>  struct swap_cluster_info {
>>> -    unsigned int data:24;
>>> -    unsigned int flags:8;
>>> +    unsigned long data;
>>>  };
>>> -#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
>>> -#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
>>> +#define CLUSTER_COUNT_SHIFT         8
>>> +#define CLUSTER_FLAG_MASK           ((1UL << CLUSTER_COUNT_SHIFT) - 1)
>>> +#define CLUSTER_COUNT_MASK          (~CLUSTER_FLAG_MASK)
>>> +#define CLUSTER_FLAG_FREE           1 /* This cluster is free */
>>> +#define CLUSTER_FLAG_NEXT_NULL              2 /* This cluster has no next cluster */
>>> +/* cluster lock, protect cluster_info contents and sis->swap_map */
>>> +#define CLUSTER_FLAG_LOCK_BIT               2
>>> +#define CLUSTER_FLAG_LOCK           (1 << CLUSTER_FLAG_LOCK_BIT)
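
A minimal sketch of how such a packed word can be driven, assuming the
kernel's bit_spin_lock() helpers operate on CLUSTER_FLAG_LOCK_BIT of
the data field (the helper names below are illustrative, not taken
from the patch):

   #include <linux/bit_spinlock.h>

   static inline unsigned int cluster_count(struct swap_cluster_info *ci)
   {
           /* the usage count lives above the flags byte */
           return ci->data >> CLUSTER_COUNT_SHIFT;
   }

   static inline void cluster_lock(struct swap_cluster_info *ci)
   {
           /* spins on the lock bit embedded in the same word */
           bit_spin_lock(CLUSTER_FLAG_LOCK_BIT, &ci->data);
   }

   static inline void cluster_unlock(struct swap_cluster_info *ci)
   {
           bit_spin_unlock(CLUSTER_FLAG_LOCK_BIT, &ci->data);
   }
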
>>
>> Why the roll-your-own locking and data structures here?  To my naive
>> understanding, it seems like you could do something like:
>>
>>   struct swap_cluster_info {
>>       spinlock_t lock;
>>       atomic_t count;
>>       unsigned int flags;
>>   };
>>
>> Then you could use proper spinlock operations which, among other things,
>> would make the realtime folks happier.  That might well help with the
>> cache-line sharing issues as well.  Some of the count manipulations could
>> perhaps be done without the lock entirely; similarly, atomic bitops might
>> save you the locking for some of the flag tweaks - though I'd have to look
>> more closely to be really sure of that.
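
As a quick sketch of the lock-free count manipulation suggested here,
assuming the atomic_t count field from the structure proposed above
(again, hypothetical helper names):

   #include <linux/atomic.h>

   static inline void cluster_count_inc(struct swap_cluster_info *ci)
   {
           /* a bare increment needs no cluster lock */
           atomic_inc(&ci->count);
   }

   static inline bool cluster_count_dec_and_test(struct swap_cluster_info *ci)
   {
           /* true when the count drops to zero */
           return atomic_dec_and_test(&ci->count);
   }
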
>>
>> The cost, of course, is the growth of this structure, but you've already
>> noted that the overhead isn't all that high; seems like it could be worth
>> it.
>
> Yes.  The data structure you proposed is much easier to use than the
> current one.  The main concern is RAM usage.  The size of the data
> structure you proposed is about 80 bytes, while the current one is
> about 8 bytes.  There is one struct swap_cluster_info for every 1MB
> of swap space, so for 1TB of swap space the total size would be 80MB,
> compared with 8MB for the current implementation.

Sorry, I had lockdep turned on when measuring the size change, so the
previous size numbers were wrong.  The size of the data structure you
proposed is 12 bytes, while that of the current one is 8 bytes on
64-bit platforms and 4 bytes on 32-bit platforms.
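
The corrected numbers line up with the expected field sizes on a
non-debug 64-bit build (my assumption; lockdep grows spinlock_t
substantially, which is what skewed the earlier measurement):

   struct swap_cluster_info {
           spinlock_t lock;        /* 4 bytes without lockdep */
           atomic_t count;         /* 4 bytes */
           unsigned int flags;     /* 4 bytes, 12 bytes total */
   };

versus the packed form, a single unsigned long: 8 bytes on 64-bit and
4 bytes on 32-bit.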

Best Regards,
Huang, Ying

