From: Dan Streetman
Date: Fri, 2 May 2014 16:00:09 -0400
Subject: Re: [PATCH 1/2] swap: change swap_info singly-linked list to list_head
To: Weijie Yang, Konrad Rzeszutek Wilk, Boris Ostrovsky, xen-devel@lists.xenproject.org
Cc: Mel Gorman, Hugh Dickins, Andrew Morton, Michal Hocko, Christian Ehrhardt, Shaohua Li, Weijie Yang, Linux-MM, linux-kernel, David Vrabel
References: <1397336454-13855-1-git-send-email-ddstreet@ieee.org> <1397336454-13855-2-git-send-email-ddstreet@ieee.org> <20140423103400.GH23991@suse.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 25, 2014 at 12:15 AM, Weijie Yang wrote:
> On Fri, Apr 25, 2014 at 2:48 AM, Dan Streetman wrote:
>> On Wed, Apr 23, 2014 at 6:34 AM, Mel Gorman wrote:
>>> On Sat, Apr 12, 2014 at 05:00:53PM -0400, Dan Streetman wrote:
>>>> diff --git a/mm/frontswap.c b/mm/frontswap.c
>>>> index 1b24bdc..fae1160 100644
>>>> --- a/mm/frontswap.c
>>>> +++ b/mm/frontswap.c
>>>> @@ -327,15 +327,12 @@ EXPORT_SYMBOL(__frontswap_invalidate_area);
>>>>
>>>>  static unsigned long __frontswap_curr_pages(void)
>>>>  {
>>>> -	int type;
>>>>  	unsigned long totalpages = 0;
>>>>  	struct swap_info_struct *si = NULL;
>>>>
>>>>  	assert_spin_locked(&swap_lock);
>>>> -	for (type = swap_list.head; type >= 0; type = si->next) {
>>>> -		si = swap_info[type];
>>>> +	list_for_each_entry(si, &swap_list_head, list)
>>>>  		totalpages += atomic_read(&si->frontswap_pages);
>>>> -	}
>>>>  	return totalpages;
>>>>  }
>>>>
>>>> @@ -347,11 +344,9 @@ static int __frontswap_unuse_pages(unsigned long total, unsigned long *unused,
>>>>  	int si_frontswap_pages;
>>>>  	unsigned long total_pages_to_unuse = total;
>>>>  	unsigned long pages = 0, pages_to_unuse = 0;
>>>> -	int type;
>>>>
>>>>  	assert_spin_locked(&swap_lock);
>>>> -	for (type = swap_list.head; type >= 0; type = si->next) {
>>>> -		si = swap_info[type];
>>>> +	list_for_each_entry(si, &swap_list_head, list) {
>>>>  		si_frontswap_pages = atomic_read(&si->frontswap_pages);
>>>>  		if (total_pages_to_unuse < si_frontswap_pages) {
>>>>  			pages = pages_to_unuse = total_pages_to_unuse;
>>>
>>> The frontswap shrink code looks suspicious. If the target is smaller than
>>> the total number of frontswap pages then it does nothing. The callers
>>> appear to get this right at least. Similarly, if the first swapfile has
>>> fewer frontswap pages than the target then it does not unuse the target
>>> number of pages because it only handles one swap file. It's outside the
>>> scope of your patch to address this or wonder if the xen balloon driver
>>> is really using it the way it's expected.
>>
>> I didn't look into the frontswap shrinking code, but I agree the
>> existing logic there doesn't look right. I'll review frontswap in
>> more detail to see if it needs changing here, unless anyone else gets
>> to it first :-)
>
> FYI, I dropped the frontswap_shrink code in a patch;
> see: https://lkml.org/lkml/2014/1/27/98

frontswap_shrink() is actually used (only) by drivers/xen/xen-selfballoon.c.
However, I completely agree with you that the backend should be doing the
shrinking, not the frontswap API. Forcing frontswap to shrink is backwards:
xen-selfballoon appears to assume that xen/tmem is the only possible
frontswap backend. It certainly doesn't make any sense for xen-selfballoon
to force zswap to shrink itself, does it?
If xen-selfballoon wants to shrink its frontswap backend (tmem), it should
do that by telling tmem directly to shrink itself, which it looks like tmem
would have to implement, just as zswap writes its LRU pages back to
swapcache when it becomes full.