From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	"hch@lst.de" <hch@lst.de>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"jack@suse.cz" <jack@suse.cz>,
	"ross.zwisler@linux.intel.com" <ross.zwisler@linux.intel.com>
Subject: Re: [PATCH v5 06/11] mm, memory_failure: Collect mapping size in collect_procs()
Date: Fri, 13 Jul 2018 06:49:16 +0000
Message-ID: <20180713064916.GB10034@hori1.linux.bs1.fc.nec.co.jp>
In-Reply-To: <153074045526.27838.11460088022513024933.stgit@dwillia2-desk3.amr.corp.intel.com>

On Wed, Jul 04, 2018 at 02:40:55PM -0700, Dan Williams wrote:
> In preparation for supporting memory_failure() for dax mappings, teach
> collect_procs() to also determine the mapping size. Unlike typical
> mappings, the dax mapping size is determined by walking page-table
> entries rather than using the compound-page accounting for THP pages.
> 
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Looks good to me.

Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
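
For readers following the series: the page-table walk that the changelog
refers to is added by the dev_pagemap patch later in this series. Below
is a minimal sketch of such a walk, assuming the usual pgd/p4d/pud/pmd/pte
helpers; the function name and the exact set of checks are illustrative
here, not the final code:

static unsigned long dax_mapping_shift(struct page *page,
                struct vm_area_struct *vma)
{
        unsigned long address = page_address_in_vma(page, vma);
        pgd_t *pgd;
        p4d_t *p4d;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        pgd = pgd_offset(vma->vm_mm, address);
        if (!pgd_present(*pgd))
                return 0;
        p4d = p4d_offset(pgd, address);
        if (!p4d_present(*p4d))
                return 0;
        pud = pud_offset(p4d, address);
        if (!pud_present(*pud))
                return 0;
        if (pud_devmap(*pud))
                return PUD_SHIFT;       /* 1GiB device-dax mapping */
        pmd = pmd_offset(pud, address);
        if (!pmd_present(*pmd))
                return 0;
        if (pmd_devmap(*pmd))
                return PMD_SHIFT;       /* 2MiB mapping */
        pte = pte_offset_map(pmd, address);
        if (pte_present(*pte) && pte_devmap(*pte)) {
                pte_unmap(pte);
                return PAGE_SHIFT;      /* single-page mapping */
        }
        pte_unmap(pte);
        return 0;
}

The shift such a walk reports is what collect_procs() could then store in
tk->size_shift for a dax page, in place of the compound-page calculation
used below for ordinary and THP pages.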

> ---
>  mm/memory-failure.c |   81 +++++++++++++++++++++++++--------------------------
>  1 file changed, 40 insertions(+), 41 deletions(-)
> 
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 9d142b9b86dc..4d70753af59c 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -174,22 +174,51 @@ int hwpoison_filter(struct page *p)
>  EXPORT_SYMBOL_GPL(hwpoison_filter);
>  
>  /*
> + * Kill all processes that have a poisoned page mapped and then isolate
> + * the page.
> + *
> + * General strategy:
> + * Find all processes having the page mapped and kill them.
> + * But we keep a page reference around so that the page is not
> + * actually freed yet.
> + * Then stash the page away
> + *
> + * There's no convenient way to get back to mapped processes
> + * from the VMAs. So do a brute-force search over all
> + * running processes.
> + *
> + * Remember that machine checks are not common (or rather
> + * if they are common you have other problems), so this shouldn't
> + * be a performance issue.
> + *
> + * Also there are some races possible while we get from the
> + * error detection to actually handle it.
> + */
> +
> +struct to_kill {
> +	struct list_head nd;
> +	struct task_struct *tsk;
> +	unsigned long addr;
> +	short size_shift;
> +	char addr_valid;
> +};
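
(As an aside: the brute-force search the comment above describes already
exists for anonymous pages. Condensed from the existing
collect_procs_anon(), and shown here only to illustrate the shape of the
walk, it is roughly:

        read_lock(&tasklist_lock);
        for_each_process(tsk) {
                struct task_struct *t = task_early_kill(tsk, force_early);

                if (!t)
                        continue;
                /* for each VMA of this task that maps the page ... */
                anon_vma_interval_tree_foreach(vmac, &av->rb_root,
                                               pgoff, pgoff) {
                        if (vmac->vma->vm_mm == t->mm)
                                add_to_kill(t, page, vmac->vma,
                                                to_kill, tkc);
                }
        }
        read_unlock(&tasklist_lock);

Each matching task gets a to_kill entry, and with this patch the new
size_shift field is filled in by add_to_kill() below.)
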
> +
> +/*
>   * Send all the processes who have the page mapped a signal.
>   * ``action optional'' if they are not immediately affected by the error
>   * ``action required'' if error happened in current execution context
>   */
> -static int kill_proc(struct task_struct *t, unsigned long addr,
> -			unsigned long pfn, struct page *page, int flags)
> +static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
>  {
> -	short addr_lsb;
> +	struct task_struct *t = tk->tsk;
> +	short addr_lsb = tk->size_shift;
>  	int ret;
>  
>  	pr_err("Memory failure: %#lx: Killing %s:%d due to hardware memory corruption\n",
>  		pfn, t->comm, t->pid);
> -	addr_lsb = compound_order(compound_head(page)) + PAGE_SHIFT;
>  
>  	if ((flags & MF_ACTION_REQUIRED) && t->mm == current->mm) {
> -		ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)addr,
> +		ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)tk->addr,
>  				       addr_lsb, current);
>  	} else {
>  		/*
> @@ -198,7 +227,7 @@ static int kill_proc(struct task_struct *t, unsigned long addr,
>  		 * This could cause a loop when the user sets SIGBUS
>  		 * to SIG_IGN, but hopefully no one will do that?
>  		 */
> -		ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)addr,
> +		ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)tk->addr,
>  				      addr_lsb, t);  /* synchronous? */
>  	}
>  	if (ret < 0)
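
To make the addr_lsb plumbing concrete from the receiving side: userspace
sees this value as si_addr_lsb in the SIGBUS siginfo, i.e. the log2 of the
amount of memory around si_addr that should be considered lost. A minimal,
illustrative consumer (note fprintf() is not async-signal-safe; a real
handler would use write() or just record the fault and _exit()):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void sigbus_handler(int sig, siginfo_t *si, void *ucontext)
{
        if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO)
                fprintf(stderr, "memory failure at %p, 2^%d bytes affected\n",
                        si->si_addr, si->si_addr_lsb);
        _exit(EXIT_FAILURE);
}

int main(void)
{
        struct sigaction sa = {
                .sa_sigaction = sigbus_handler,
                .sa_flags = SA_SIGINFO,
        };

        sigaction(SIGBUS, &sa, NULL);
        /* ... map a dax file and touch a poisoned page here ... */
        return 0;
}

With this patch a dax consumer would see si_addr_lsb reflect the actual
page-table mapping size rather than a compound-page-derived guess.
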
> @@ -235,35 +264,6 @@ void shake_page(struct page *p, int access)
>  EXPORT_SYMBOL_GPL(shake_page);
>  
>  /*
> - * Kill all processes that have a poisoned page mapped and then isolate
> - * the page.
> - *
> - * General strategy:
> - * Find all processes having the page mapped and kill them.
> - * But we keep a page reference around so that the page is not
> - * actually freed yet.
> - * Then stash the page away
> - *
> - * There's no convenient way to get back to mapped processes
> - * from the VMAs. So do a brute-force search over all
> - * running processes.
> - *
> - * Remember that machine checks are not common (or rather
> - * if they are common you have other problems), so this shouldn't
> - * be a performance issue.
> - *
> - * Also there are some races possible while we get from the
> - * error detection to actually handle it.
> - */
> -
> -struct to_kill {
> -	struct list_head nd;
> -	struct task_struct *tsk;
> -	unsigned long addr;
> -	char addr_valid;
> -};
> -
> -/*
>   * Failure handling: if we can't find or can't kill a process there's
>   * not much we can do.	We just print a message and ignore otherwise.
>   */
> @@ -292,6 +292,7 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
>  	}
>  	tk->addr = page_address_in_vma(p, vma);
>  	tk->addr_valid = 1;
> +	tk->size_shift = compound_order(compound_head(p)) + PAGE_SHIFT;
>  
>  	/*
>  	 * In theory we don't have to kill when the page was
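
The arithmetic on the added line may be worth spelling out: for a base
page, compound_order(compound_head(p)) is 0, so size_shift is just
PAGE_SHIFT (12 on x86-64, a 4KiB blast radius); for a 2MiB THP the
compound order is 9, giving size_shift = 21. The dax patches later in the
series supply this shift from a page-table walk instead, since the
compound-page accounting does not apply to dax pages.
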
> @@ -317,9 +318,8 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
>   * Also when FAIL is set do a force kill because something went
>   * wrong earlier.
>   */
> -static void kill_procs(struct list_head *to_kill, int forcekill,
> -			  bool fail, struct page *page, unsigned long pfn,
> -			  int flags)
> +static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
> +		unsigned long pfn, int flags)
>  {
>  	struct to_kill *tk, *next;
>  
> @@ -342,8 +342,7 @@ static void kill_procs(struct list_head *to_kill, int forcekill,
>  			 * check for that, but we need to tell the
>  			 * process anyways.
>  			 */
> -			else if (kill_proc(tk->tsk, tk->addr,
> -					      pfn, page, flags) < 0)
> +			else if (kill_proc(tk, pfn, flags) < 0)
>  				pr_err("Memory failure: %#lx: Cannot send advisory machine check signal to %s:%d\n",
>  				       pfn, tk->tsk->comm, tk->tsk->pid);
>  		}
> @@ -1012,7 +1011,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
>  	 * any accesses to the poisoned memory.
>  	 */
>  	forcekill = PageDirty(hpage) || (flags & MF_MUST_KILL);
> -	kill_procs(&tokill, forcekill, !unmap_success, p, pfn, flags);
> +	kill_procs(&tokill, forcekill, !unmap_success, pfn, flags);
>  
>  	return unmap_success;
>  }
> 
> 

Thread overview: 24+ messages
2018-07-04 21:40 [PATCH v5 00/11] mm: Teach memory_failure() about ZONE_DEVICE pages Dan Williams
2018-07-04 21:40 ` [PATCH v5 01/11] device-dax: Convert to vmf_insert_mixed and vm_fault_t Dan Williams
2018-07-04 21:40 ` [PATCH v5 02/11] device-dax: Enable page_mapping() Dan Williams
2018-07-04 21:40 ` [PATCH v5 03/11] device-dax: Set page->index Dan Williams
2018-07-04 21:40 ` [PATCH v5 04/11] filesystem-dax: " Dan Williams
2018-07-04 21:40 ` [PATCH v5 05/11] mm, madvise_inject_error: Let memory_failure() optionally take a page reference Dan Williams
2018-07-13  6:31   ` Naoya Horiguchi
2018-07-14  0:34     ` Dan Williams
2018-07-04 21:40 ` [PATCH v5 06/11] mm, memory_failure: Collect mapping size in collect_procs() Dan Williams
2018-07-13  6:49   ` Naoya Horiguchi [this message]
2018-07-04 21:41 ` [PATCH v5 07/11] filesystem-dax: Introduce dax_lock_mapping_entry() Dan Williams
2018-07-05  1:07   ` kbuild test robot
2018-07-05  3:31   ` kbuild test robot
2018-07-05  3:33   ` [PATCH v6] " Dan Williams
2018-09-24 15:57   ` [PATCH v5 07/11] " Barret Rhoden
2018-09-27 11:13     ` Jan Kara
2018-07-04 21:41 ` [PATCH v5 08/11] mm, memory_failure: Teach memory_failure() about dev_pagemap pages Dan Williams
2018-07-13  8:52   ` Naoya Horiguchi
2018-07-14  0:28     ` Dan Williams
2018-07-17  6:36       ` Naoya Horiguchi
2018-07-04 21:41 ` [PATCH v5 09/11] x86/mm/pat: Prepare {reserve, free}_memtype() for "decoy" addresses Dan Williams
2018-07-04 21:41 ` [PATCH v5 10/11] x86/memory_failure: Introduce {set, clear}_mce_nospec() Dan Williams
2018-07-04 21:41 ` [PATCH v5 11/11] libnvdimm, pmem: Restore page attributes when clearing errors Dan Williams
2018-07-13  4:44 ` [PATCH v5 00/11] mm: Teach memory_failure() about ZONE_DEVICE pages Dan Williams
