From: Michal Hocko <mhocko@kernel.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-api@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>,
	Tim Murray <timmurray@google.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Daniel Colascione <dancol@google.com>,
	Shakeel Butt <shakeelb@google.com>,
	Sonny Rao <sonnyrao@google.com>,
	Brian Geffon <bgeffon@google.com>,
	jannh@google.com, oleg@redhat.com, christian@brauner.io,
	oleksandr@redhat.com, hdanton@sina.com, lizeb@google.com
Subject: Re: [PATCH v2 4/5] mm: introduce MADV_PAGEOUT
Date: Thu, 20 Jun 2019 09:04:44 +0200
Message-ID: <20190620070444.GB12083@dhcp22.suse.cz>
In-Reply-To: <20190620041620.GB105727@google.com>

On Thu 20-06-19 13:16:20, Minchan Kim wrote:
> On Wed, Jun 19, 2019 at 03:24:50PM +0200, Michal Hocko wrote:
> > On Mon 10-06-19 20:12:51, Minchan Kim wrote:
> > [...]
> > > +static int madvise_pageout_pte_range(pmd_t *pmd, unsigned long addr,
> > > +				unsigned long end, struct mm_walk *walk)
> > 
> > Again the same question about potential code reuse...
> > [...]
> > > +regular_page:
> > > +	tlb_change_page_size(tlb, PAGE_SIZE);
> > > +	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > +	flush_tlb_batched_pending(mm);
> > > +	arch_enter_lazy_mmu_mode();
> > > +	for (; addr < end; pte++, addr += PAGE_SIZE) {
> > > +		ptent = *pte;
> > > +		if (!pte_present(ptent))
> > > +			continue;
> > > +
> > > +		page = vm_normal_page(vma, addr, ptent);
> > > +		if (!page)
> > > +			continue;
> > > +
> > > +		if (isolate_lru_page(page))
> > > +			continue;
> > > +
> > > +		isolated++;
> > > +		if (pte_young(ptent)) {
> > > +			ptent = ptep_get_and_clear_full(mm, addr, pte,
> > > +							tlb->fullmm);
> > > +			ptent = pte_mkold(ptent);
> > > +			set_pte_at(mm, addr, pte, ptent);
> > > +			tlb_remove_tlb_entry(tlb, pte, addr);
> > > +		}
> > > +		ClearPageReferenced(page);
> > > +		test_and_clear_page_young(page);
> > > +		list_add(&page->lru, &page_list);
> > > +		if (isolated >= SWAP_CLUSTER_MAX) {
> > 
> > Why do we need SWAP_CLUSTER_MAX batching? Especially when we need ...
> > [...]
> 
> It aims to prevent an early OOM kill that could result from isolating
> too many LRU pages concurrently.

This is a good point. For some reason I thought that we considered
isolated pages in should_reclaim_retry, but we do not anymore (since the
move from zone to node LRUs, I guess). Please stick a comment there.
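Something along these lines would do (just a sketch, wording up to you):

	/*
	 * Isolate in SWAP_CLUSTER_MAX batches: pages sitting on our
	 * private list are invisible to the rest of the VM, and
	 * should_reclaim_retry no longer accounts isolated pages (since
	 * the move from zone to node LRUs), so isolating a huge range
	 * in one go could trigger a premature OOM kill.
	 */
	if (isolated >= SWAP_CLUSTER_MAX) {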

> > > +unsigned long reclaim_pages(struct list_head *page_list)
> > > +{
> > > +	int nid = -1;
> > > +	unsigned long nr_reclaimed = 0;
> > > +	LIST_HEAD(node_page_list);
> > > +	struct reclaim_stat dummy_stat;
> > > +	struct scan_control sc = {
> > > +		.gfp_mask = GFP_KERNEL,
> > > +		.priority = DEF_PRIORITY,
> > > +		.may_writepage = 1,
> > > +		.may_unmap = 1,
> > > +		.may_swap = 1,
> > > +	};
> > > +
> > > +	while (!list_empty(page_list)) {
> > > +		struct page *page;
> > > +
> > > +		page = lru_to_page(page_list);
> > > +		if (nid == -1) {
> > > +			nid = page_to_nid(page);
> > > +			INIT_LIST_HEAD(&node_page_list);
> > > +		}
> > > +
> > > +		if (nid == page_to_nid(page)) {
> > > +			list_move(&page->lru, &node_page_list);
> > > +			continue;
> > > +		}
> > > +
> > > +		nr_reclaimed += shrink_page_list(&node_page_list,
> > > +						NODE_DATA(nid),
> > > +						&sc, 0,
> > > +						&dummy_stat, false);
> > 
> > per-node batching, in fact. Other than that nothing really jumped out
> > at me, except for the shared page cache side channel timing aspect not
> > being considered AFAICS. To be more specific: pushing out shared page
> > cache is possible even now, but this interface gives a much easier tool
> > to evict shared state and perform all sorts of timing attacks. Unless I
> > am missing something, we should be doing something similar to mincore
> > and ignore shared pages without write access, or at least document why
> > we do not care.
> 
> I'm not sure I understand the side channel attack. As you mentioned,
> without this syscall:
> 1. they can already do that simply by memory hogging

That is much harder to turn into a practical attack, because the reclaim
logic is not fully under the attacker's control. Having a direct tool to
reclaim memory just opens the door to measuring the other consumers of
that memory and all sorts of side channels.
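To make that concrete, here is a minimal userspace sketch of the kind of
probe this enables on a shared file mapping as the patch stands (a
hypothetical illustration, untested; the file path and the MADV_PAGEOUT
value are assumptions, not taken from the series):

/*
 * probe.c - sketch only. Any process that can map a shared file could
 * evict its pages for *everyone* with MADV_PAGEOUT, then measure how
 * long a re-access takes: a fast access means some other process
 * faulted the page back in - a classic page cache side channel.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21	/* assumed; take the value from the series' uapi header */
#endif

static long touch_ns(volatile char *p)
{
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	(void)*p;			/* faults the page in if it was evicted */
	clock_gettime(CLOCK_MONOTONIC, &b);
	return (b.tv_sec - a.tv_sec) * 1000000000L +
	       (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
	int fd = open("/usr/bin/cat", O_RDONLY);	/* any world-readable file */
	char *map;

	if (fd < 0)
		return 1;
	map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED)
		return 1;

	(void)*(volatile char *)map;		/* make sure it is resident */
	madvise(map, 4096, MADV_PAGEOUT);	/* evict it for every mapper */
	/* later: a fast re-access means another process touched the page */
	printf("re-access took %ld ns\n", touch_ns(map));
	return 0;
}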

> 2. If we need to fix MADV_PAGEOUT, does that mean we need to fix
> MADV_DONTNEED, too?

Nope, because MADV_DONTNEED doesn't unmap pages from other processes; it
only zaps the calling process's own mapping, while MADV_PAGEOUT evicts
the page itself for every mapper.
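For MADV_PAGEOUT itself, a gate modeled on can_do_mincore() would be one
way out (just a sketch with an assumed name, compile-tested nowhere; the
exact policy is up for discussion):

/*
 * Sketch, not part of the patch: refuse to page out file pages the
 * caller could not open for writing, mirroring can_do_mincore().
 * Anonymous memory is always the caller's own business.
 */
static bool can_do_pageout(struct vm_area_struct *vma)
{
	if (vma_is_anonymous(vma))
		return true;
	if (!vma->vm_file)
		return false;
	return inode_owner_or_capable(file_inode(vma->vm_file)) ||
	       inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
}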
-- 
Michal Hocko
SUSE Labs
