From: "Michael S. Tsirkin" <mst@redhat.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>,
	David Rientjes <rientjes@google.com>,
	Vladimir Davydov <vdavydov@parallels.com>
Subject: Re: [PATCH 09/10] vhost, mm: make sure that oom_reaper doesn't reap memory read by vhost
Date: Tue, 23 Aug 2016 15:54:50 +0300
Message-ID: <20160823155330-mutt-send-email-mst@kernel.org>
In-Reply-To: <20160823090655.GA23583@dhcp22.suse.cz>

On Tue, Aug 23, 2016 at 11:06:55AM +0200, Michal Hocko wrote:
> On Tue 23-08-16 09:55:55, Michal Hocko wrote:
> > On Tue 23-08-16 00:01:23, Michael S. Tsirkin wrote:
> > [...]
> > > Actually, vhost net calls out to tun which does regular copy_from_iter.
> > > Returning 0 there will cause corrupted packets in the network: not a
> > > huge deal, but ugly.  And I don't think we want to annotate tun and
> > > macvtap as well.
> > 
> > Hmm, OK, I wasn't aware of that path, and being consistent here matters.
> > If the vhost driver can interact with other subsystems then there is
> > really no other option than hooking into the page fault path. Oh well.
> 
> Here is a completely untested patch, just as a sanity check.
> ---
> From f32711ea518f8151d6efb1c71f359211117dd5a2 Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.com>
> Date: Tue, 14 Jun 2016 09:33:06 +0200
> Subject: [PATCH] vhost, mm: make sure that oom_reaper doesn't reap memory read
>  by vhost
> 
> The vhost driver relies on copy_from_user/get_user from a kernel thread.
> This makes it impossible to reap the memory of an oom victim which
> shares the mm with the vhost kernel thread, because the thread could see
> a zero page unexpectedly and theoretically make an incorrect decision
> visible outside of the killed task context. To quote Michael S. Tsirkin:
> : Getting an error from __get_user and friends is handled gracefully.
> : Getting zero instead of a real value will cause userspace
> : memory corruption.
> 
> The vhost kernel thread is bound to an open fd of the vhost device,
> which in theory is not tied to the mm owner's life cycle. The fd can be
> inherited by or passed over to another process, which means that we
> really have to be careful about unexpected memory corruption because,
> unlike for normal oom victims, the result will be visible outside of
> the oom victim context.
> 
> Make sure that no kthread context (users of use_mm) can ever see
> corrupted data because of the oom reaper: hook into the page fault
> path and check the MMF_UNSTABLE mm flag. __oom_reap_task_mm will set
> the flag before it starts unmapping the address space, while the flag
> is checked after the page fault has been handled. If the flag is set
> then SIGBUS is triggered, so any gup user will get an error code.
> 
> This patch shouldn't have any visible effect at this moment because the
> OOM killer doesn't invoke the oom reaper for tasks whose mm is shared
> with kthreads yet.
> 
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  include/linux/sched.h |  1 +
>  mm/memory.c           | 13 +++++++++++++
>  mm/oom_kill.c         |  8 ++++++++
>  3 files changed, 22 insertions(+)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index eda579f3283a..63acaf9cc51c 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -522,6 +522,7 @@ static inline int get_dumpable(struct mm_struct *mm)
>  #define MMF_HAS_UPROBES		19	/* has uprobes */
>  #define MMF_RECALC_UPROBES	20	/* MMF_HAS_UPROBES can be wrong */
>  #define MMF_OOM_SKIP		21	/* mm is of no interest for the OOM killer */
> +#define MMF_UNSTABLE		22	/* mm is unstable for copy_from_user */
>  
>  #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK)
>  
> diff --git a/mm/memory.c b/mm/memory.c
> index 83be99d9d8a1..5c1df34fef64 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3656,6 +3656,19 @@ int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
>                          mem_cgroup_oom_synchronize(false);
>  	}
>  
> +	/*
> +	 * This mm has already been reaped by the oom reaper and so the
> +	 * refault cannot be trusted in general. Anonymous refaults, for
> +	 * example, would lose data and hand back a zero page instead.
> +	 * This is especially a problem for use_mm() because a regular
> +	 * task would just die and the corrupted data would not be
> +	 * visible anywhere, while a kthread will outlive the oom victim
> +	 * and potentially propagate the data further.
> +	 */
> +	if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR)
> +				&& test_bit(MMF_UNSTABLE, &mm->flags)))
> +		ret = VM_FAULT_SIGBUS;
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(handle_mm_fault);
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 3b990544db6d..5a3ba96c8338 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -495,6 +495,14 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
>  		goto unlock_oom;
>  	}
>  
> +	/*
> +	 * Tell all users of get_user/copy_from_user etc. that the content
> +	 * is no longer stable. No barriers are really needed because
> +	 * unmapping should imply barriers already and the reader would
> +	 * hit a page fault if it stumbled over reaped memory.
> +	 */
> +	set_bit(MMF_UNSTABLE, &mm->flags);
> +
>  	tlb_gather_mmu(&tlb, mm, 0, -1);
>  	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
>  		if (is_vm_hugetlb_page(vma))
> -- 
> 2.8.1
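
To make the failure mode concrete: below is a hypothetical sketch (mine,
with illustrative names, not the real vhost or tun code) of why an error
from __get_user and friends is handled gracefully while a silent zero is
not:

	/*
	 * Illustrative only: a worker thread reading an index from
	 * guest memory.  An -EFAULT from __get_user() is handled
	 * gracefully here; silently reading 0 instead of the real
	 * value would be acted upon and corrupt guest-visible state.
	 */
	static int fetch_avail_idx(u16 __user *uptr, u16 *idx)
	{
		if (unlikely(__get_user(*idx, uptr)))
			return -EFAULT;	/* caller can back off and report */
		return 0;
	}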

That's much better IMHO, and it's also much clearer why there's
no need for barriers here.
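
On the reader side, a minimal sketch (again mine, assuming a vhost-style
kthread that attached to the victim's mm with use_mm()): once
__oom_reap_task_mm() has set MMF_UNSTABLE and unmapped the address space,
a refault takes the new VM_FAULT_SIGBUS path and copy_from_user() fails
instead of quietly filling the buffer with zeroes:

	use_mm(mm);		/* borrow the userspace mm, as vhost does */
	if (copy_from_user(&hdr, uaddr, sizeof(hdr)))
		ret = -EFAULT;	/* reaped memory now fails loudly */
	unuse_mm(mm);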

Acked-by: Michael S. Tsirkin <mst@redhat.com>



> -- 
> Michal Hocko
> SUSE Labs
