Date: Thu, 20 Jun 2019 09:04:44 +0200
From: Michal Hocko
To: Minchan Kim
Cc: Andrew Morton, linux-mm, LKML, linux-api@vger.kernel.org,
	Johannes Weiner, Tim Murray, Joel Fernandes, Suren Baghdasaryan,
	Daniel Colascione, Shakeel Butt, Sonny Rao, Brian Geffon,
	jannh@google.com, oleg@redhat.com, christian@brauner.io,
	oleksandr@redhat.com, hdanton@sina.com, lizeb@google.com
Subject: Re: [PATCH v2 4/5] mm: introduce MADV_PAGEOUT
Message-ID: <20190620070444.GB12083@dhcp22.suse.cz>
References: <20190610111252.239156-1-minchan@kernel.org>
	<20190610111252.239156-5-minchan@kernel.org>
	<20190619132450.GQ2968@dhcp22.suse.cz>
	<20190620041620.GB105727@google.com>
In-Reply-To: <20190620041620.GB105727@google.com>

On Thu 20-06-19 13:16:20, Minchan Kim wrote:
> On Wed, Jun 19, 2019 at 03:24:50PM +0200, Michal Hocko wrote:
> > On Mon 10-06-19 20:12:51, Minchan Kim wrote:
> > [...]
> > > +static int madvise_pageout_pte_range(pmd_t *pmd, unsigned long addr,
> > > +				unsigned long end, struct mm_walk *walk)
> >
> > Again the same question about a potential code reuse...
> > [...]
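
(For context: the "code reuse" being asked about presumably refers to the
MADV_COLD pte walker from earlier in this series, which performs a nearly
identical page-table scan. A minimal sketch of that kind of factoring is
below; the names madvise_walk_private and madvise_cold_or_pageout_pte_range
are illustrative, not taken from the posted patch.)

	/*
	 * Illustrative sketch only: a single walker shared by MADV_COLD and
	 * MADV_PAGEOUT, parameterized by a flag, instead of two nearly
	 * identical pte-range functions.
	 */
	struct madvise_walk_private {
		struct mmu_gather *tlb;
		bool pageout;	/* true for MADV_PAGEOUT, false for MADV_COLD */
	};

	static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
				unsigned long addr, unsigned long end,
				struct mm_walk *walk)
	{
		struct madvise_walk_private *private = walk->private;
		struct mmu_gather *tlb = private->tlb;

		/*
		 * The shared pte scan (quoted below) would live here once:
		 * pages are merely deactivated for MADV_COLD, or isolated
		 * and handed to reclaim_pages() when private->pageout.
		 */
		return 0;
	}
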
> > > +regular_page:
> > > +	tlb_change_page_size(tlb, PAGE_SIZE);
> > > +	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > +	flush_tlb_batched_pending(mm);
> > > +	arch_enter_lazy_mmu_mode();
> > > +	for (; addr < end; pte++, addr += PAGE_SIZE) {
> > > +		ptent = *pte;
> > > +		if (!pte_present(ptent))
> > > +			continue;
> > > +
> > > +		page = vm_normal_page(vma, addr, ptent);
> > > +		if (!page)
> > > +			continue;
> > > +
> > > +		if (isolate_lru_page(page))
> > > +			continue;
> > > +
> > > +		isolated++;
> > > +		if (pte_young(ptent)) {
> > > +			ptent = ptep_get_and_clear_full(mm, addr, pte,
> > > +							tlb->fullmm);
> > > +			ptent = pte_mkold(ptent);
> > > +			set_pte_at(mm, addr, pte, ptent);
> > > +			tlb_remove_tlb_entry(tlb, pte, addr);
> > > +		}
> > > +		ClearPageReferenced(page);
> > > +		test_and_clear_page_young(page);
> > > +		list_add(&page->lru, &page_list);
> > > +		if (isolated >= SWAP_CLUSTER_MAX) {
> >
> > Why do we need SWAP_CLUSTER_MAX batching? Especially when we need ...
> > [...]
>
> It aims for preventing early OOM kill since we isolate too many LRU
> pages concurrently.

This is a good point. For some reason I thought that we consider
isolated pages in should_reclaim_retry, but we do not anymore (since we
moved from zone to node LRUs, I guess). Please stick a comment there.

> > > +unsigned long reclaim_pages(struct list_head *page_list)
> > > +{
> > > +	int nid = -1;
> > > +	unsigned long nr_reclaimed = 0;
> > > +	LIST_HEAD(node_page_list);
> > > +	struct reclaim_stat dummy_stat;
> > > +	struct scan_control sc = {
> > > +		.gfp_mask = GFP_KERNEL,
> > > +		.priority = DEF_PRIORITY,
> > > +		.may_writepage = 1,
> > > +		.may_unmap = 1,
> > > +		.may_swap = 1,
> > > +	};
> > > +
> > > +	while (!list_empty(page_list)) {
> > > +		struct page *page;
> > > +
> > > +		page = lru_to_page(page_list);
> > > +		if (nid == -1) {
> > > +			nid = page_to_nid(page);
> > > +			INIT_LIST_HEAD(&node_page_list);
> > > +		}
> > > +
> > > +		if (nid == page_to_nid(page)) {
> > > +			list_move(&page->lru, &node_page_list);
> > > +			continue;
> > > +		}
> > > +
> > > +		nr_reclaimed += shrink_page_list(&node_page_list,
> > > +						NODE_DATA(nid),
> > > +						&sc, 0,
> > > +						&dummy_stat, false);
> >
> > per-node batching in fact. Other than that nothing really jumped at
> > me, except for the shared page cache side channel timing aspect not
> > being considered AFAICS. To be more specific: pushing out a shared
> > page cache is possible even now, but this interface gives a much
> > easier tool to evict shared state and perform all sorts of timing
> > attacks. Unless I am missing something, we should be doing something
> > similar to mincore and ignore shared pages without write access, or
> > at least document why we do not care.
>
> I'm not sure I understand the side channel attack. As you mentioned,
> without this syscall,
> 1. they already can do that simply by memory hogging

Memory hogging is much harder to use for practical attacks because the
reclaim logic is not fully under the attacker's control. Having a
direct tool to reclaim memory opens the door to measuring the other
consumers of that memory and to all sorts of side channels.

> 2. If we need to fix MADV_PAGEOUT, does that mean we need to fix
>    MADV_DONTNEED, too?

Nope, because MADV_DONTNEED doesn't unmap pages from other processes.

-- 
Michal Hocko
SUSE Labs
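
(For reference: a mincore-style check of the kind suggested above could look
roughly like the sketch below. This is hypothetical, not from the posted
patch; the helper name can_do_pageout() is made up here, and the test
mirrors can_do_mincore() in mm/mincore.c.)

	/*
	 * Hypothetical sketch: refuse to page out file-backed mappings
	 * unless the caller could dirty the file anyway, mirroring
	 * can_do_mincore(), so MADV_PAGEOUT cannot be used to evict and
	 * then time-probe a shared page cache it has no write access to.
	 */
	static bool can_do_pageout(struct vm_area_struct *vma)
	{
		if (vma_is_anonymous(vma))
			return true;
		if (!vma->vm_file)
			return false;
		/* file owner, privileged, or write permission on the inode */
		return inode_owner_or_capable(file_inode(vma->vm_file)) ||
			inode_permission(file_inode(vma->vm_file),
					 MAY_WRITE) == 0;
	}

A VMA failing such a check would simply be skipped by the pageout walker,
leaving shared, read-only page cache untouched.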