Date: Fri, 22 Dec 2017 08:14:47 -0800
From: "Paul E. McKenney"
To: "Huang, Ying"
Cc: Minchan Kim, Andrew Morton, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Hugh Dickins, Johannes Weiner,
	Tim Chen, Shaohua Li, Mel Gorman, Jérôme Glisse, Michal Hocko,
	Andrea Arcangeli, David Rientjes, Rik van Riel, Jan Kara,
	Dave Jiang, Aaron Lu
Subject: Re: [PATCH -V4 -mm] mm, swap: Fix race between swapoff and some swap operations
Reply-To: paulmck@linux.vnet.ibm.com
References: <20171220012632.26840-1-ying.huang@intel.com>
	<20171221021619.GA27475@bbox> <871sjopllj.fsf@yhuang-dev.intel.com>
	<20171221235813.GA29033@bbox> <87r2rmj1d8.fsf@yhuang-dev.intel.com>
In-Reply-To: <87r2rmj1d8.fsf@yhuang-dev.intel.com>
Message-Id: <20171222161447.GF7829@linux.vnet.ibm.com>

On Fri, Dec 22, 2017 at 10:14:43PM +0800, Huang, Ying wrote:
> Minchan Kim writes:
> 
> > On Thu, Dec 21, 2017 at 03:48:56PM +0800, Huang, Ying wrote:
> >> Minchan Kim writes:
> >> 
> >> > On Wed, Dec 20, 2017 at 09:26:32AM +0800, Huang, Ying wrote:
> >> >> From: Huang Ying
> >> >> 
> >> >> When swapin is performed, after getting the swap entry information
> >> >> from the page table, the system will swap in the swap entry without
> >> >> any lock held to prevent the swap device from being swapped off.
> >> >> This may cause a race like the one below:
> >> >> 
> >> >> CPU 1                           CPU 2
> >> >> -----                           -----
> >> >> do_swap_page
> >> >>   swapin_readahead
> >> >>     __read_swap_cache_async
> >> >>                                 swapoff
> >> >>       swapcache_prepare
> >> >>                                   p->swap_map = NULL
> >> >>         __swap_duplicate
> >> >>           p->swap_map[?]  /* !!! NULL pointer access */
> >> >> 
> >> >> Because swapoff is usually done only when the system shuts down,
> >> >> the race may not hit many people in practice.  But it is still a
> >> >> race that needs to be fixed.
> >> >> 
> >> >> To fix the race, get_swap_device() is added to check whether the
> >> >> specified swap entry is valid in its swap device.  If so, it will
> >> >> keep the swap entry valid by preventing the swap device from being
> >> >> swapped off, until put_swap_device() is called.
> >> >> 
> >> >> Because swapoff() is a very rare code path, to make the normal path
> >> >> run as fast as possible, RCU instead of a reference count is used
> >> >> to implement get/put_swap_device().  From get_swap_device() to
> >> >> put_swap_device(), the RCU read lock is held, so synchronize_rcu()
> >> >> in swapoff() will wait until put_swap_device() is called.
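
To make the scheme above concrete, a rough sketch of what RCU-based
get/put_swap_device() helpers could look like follows.  The flag name
SWP_VALID and the helper wait_for_swap_users() below are my own
illustration, not necessarily what the patch itself does:

struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
        struct swap_info_struct *si;

        if (swp_type(entry) >= MAX_SWAPFILES)
                return NULL;

        rcu_read_lock();
        si = swap_info[swp_type(entry)];
        /* Reject entries whose device is absent or being swapped off. */
        if (!si || !(si->flags & SWP_VALID) ||
            swp_offset(entry) >= si->max) {
                rcu_read_unlock();
                return NULL;
        }
        /* The RCU read-side critical section stays open until
         * put_swap_device() is called by the caller. */
        return si;
}

void put_swap_device(struct swap_info_struct *si)
{
        rcu_read_unlock();
}

/* swapoff() side: invalidate the device, then wait for all readers. */
static void wait_for_swap_users(struct swap_info_struct *si)
{
        spin_lock(&si->lock);
        si->flags &= ~SWP_VALID;
        spin_unlock(&si->lock);
        /* Wait for every open get/put_swap_device() section. */
        synchronize_rcu();
        /* Now swap_map, cluster_info, etc. can be freed safely. */
}
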
> >> >> In addition to swap_map, cluster_info, and the other data
> >> >> structures in struct swap_info_struct, the swap cache radix tree
> >> >> will be freed after swapoff, so this patch fixes the race between
> >> >> swap cache lookup and swapoff too.
> >> >> 
> >> >> Cc: Hugh Dickins
> >> >> Cc: Paul E. McKenney
> >> >> Cc: Minchan Kim
> >> >> Cc: Johannes Weiner
> >> >> Cc: Tim Chen
> >> >> Cc: Shaohua Li
> >> >> Cc: Mel Gorman
> >> >> Cc: "Jérôme Glisse"
> >> >> Cc: Michal Hocko
> >> >> Cc: Andrea Arcangeli
> >> >> Cc: David Rientjes
> >> >> Cc: Rik van Riel
> >> >> Cc: Jan Kara
> >> >> Cc: Dave Jiang
> >> >> Cc: Aaron Lu
> >> >> Signed-off-by: "Huang, Ying"
> >> >> 
> >> >> Changelog:
> >> >> 
> >> >> v4:
> >> >> 
> >> >> - Use synchronize_rcu() in enable_swap_info() to reduce the
> >> >>   overhead of the normal paths further.
> >> > 
> >> > Hi Huang,
> >> 
> >> Hi, Minchan,
> >> 
> >> > This version is much better than the old one.  To me, that is not
> >> > because of the rcu/srcu/refcount choice, but because it adds the
> >> > swap device dependency (i.e., get/put) into every swap related
> >> > function, so users who are not interested in swap don't need to
> >> > care about it.  Good.
> >> > 
> >> > The problem is caused by freeing the swap related data structures
> >> > *dynamically*, while the old swap logic was based on static data
> >> > structures (i.e., never freed; code just verifies whether an entry
> >> > is stale).
> >> > So I reviewed some places which use PageSwapCache and swp_entry_t
> >> > and which could access the swap related data structures.
> >> > 
> >> > An example is __isolate_lru_page.
> >> > 
> >> > It calls page_mapping to get an address_space.
> >> > What happens if the page is in the swap cache and races with
> >> > swapoff?  The mapping obtained could disappear because of the race.
> >> > Right?
> >> 
> >> Yes.  We should think about that.  Considering the file cache pages,
> >> the address_space backing the file cache pages may be freed
> >> dynamically too.  So to use the page_mapping() return value for file
> >> cache pages, some kind of locking is needed to guarantee the
> >> address_space isn't freed under us.  The page may be locked, or
> >> under writeback, or some other locks
> > 
> > I didn't look at the code in detail, but I guess every file page
> > should be freed before the address_space is destroyed, and
> > page_lock/lru_lock makes that safe.  So it wouldn't be a problem.
> > 
> > However, in the case of swapoff, it doesn't remove pages from the LRU
> > list, so there is no lock to prevent the race at this moment. :(
> 
> Take a look at the file cache pages and the file cache address_space
> freeing code path.  It appears that a similar situation is possible
> for them too.
> 
> The file cache pages will be deleted from the file cache address_space
> before the address_space (embedded in the inode) is freed.  But they
> will be deleted from the LRU list only when their refcount drops to
> zero; please take a look at put_page() and release_pages().  The
> address_space, on the other hand, will be freed only after the
> references to all of its file cache pages have been put.
> So if someone holds a reference to a file cache page for quite a long
> time, it is possible for that file cache page to stay on the LRU list
> after the inode/address_space is freed.
> 
> And I found that the inode/address_space is freed with call_rcu().  I
> don't know whether this is related to page_mapping().
> 
> This is just my understanding.
> 
> >> need to be held, for example, the page table lock, or lru_lock, etc.
> >> For __isolate_lru_page(), lru_lock will be held when it is called.
> >> And we will call synchronize_rcu() between clearing PageSwapCache
> >> and freeing the swap cache, so the usage of the swap cache in
> >> __isolate_lru_page() should be safe.  Do you think my analysis makes
> >> sense?
> > 
> > I don't understand how synchronize_rcu closes the race with
> > spin_lock.  Paul might help with it.
> 
> Per my understanding, spin_lock() will preempt_disable(), so
> synchronize_rcu() will wait until spin_unlock() is called.

Only when CONFIG_PREEMPT=n!  In CONFIG_PREEMPT=y kernels,
preempt_disable() won't necessarily prevent synchronize_rcu() from
completing.  Now, preempt_disable() does prevent synchronize_sched()
from completing, but that would require changing the rcu_read_lock()
and rcu_read_unlock() to rcu_read_lock_sched()/rcu_read_unlock_sched()
or preempt_disable()/preempt_enable().  Another fix would be to invoke
rcu_read_lock() just after acquiring the spinlock and rcu_read_unlock()
just before releasing it.

							Thanx, Paul

> > Even if we solve it, there is another problem I spotted.
> > When I look at migrate_vma_pages, it passes the mapping to
> > migrate_page, which accesses mapping->tree_lock unconditionally even
> > though the address_space is already gone.
> 
> Before migrate_vma_pages() is called, migrate_vma_prepare() is called,
> where the pages are locked.  So it is safe.
> 
> > Hmm, I didn't check all the sites which use PageSwapCache and
> > swp_entry_t, but my gut feeling is it would not be simple.
> 
> Yes.  We should check all the sites.  Thanks for your help!
> 
> Best Regards,
> Huang, Ying
> 
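
For reference, the second fix suggested above (taking rcu_read_lock()
just inside the spinlock) would look roughly like the sketch below.
The function name and the passed-in lock are only illustrative; this
is not code from the patch:

#include <linux/mm.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/*
 * The RCU read-side critical section is nested inside the
 * spinlock-protected region, so synchronize_rcu() in swapoff() cannot
 * return while this code is using the mapping, even on
 * CONFIG_PREEMPT=y kernels where preempt_disable() alone does not
 * block synchronize_rcu().
 */
static bool mapping_still_there(struct page *page, spinlock_t *lru_lock)
{
        struct address_space *mapping;
        bool ret;

        spin_lock_irq(lru_lock);
        rcu_read_lock();

        mapping = page_mapping(page);
        ret = mapping != NULL;

        rcu_read_unlock();
        spin_unlock_irq(lru_lock);

        return ret;
}

The other option, switching to rcu_read_lock_sched()/
rcu_read_unlock_sched() with synchronize_sched() in swapoff(), works
because preempt_disable(), and therefore spin_lock(), does block
synchronize_sched().
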