* [PATCH] mm: Fix races between address_space dereference and free in page_evictable
@ 2018-02-12  8:12 ` Huang, Ying
  0 siblings, 0 replies; 15+ messages in thread
From: Huang, Ying @ 2018-02-12  8:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Mel Gorman, Minchan Kim,
	Jan Kara, Johannes Weiner, Michal Hocko

From: Huang Ying <ying.huang@intel.com>

When page_mapping() is called and the mapping is dereferenced in
page_evictable() through shrink_active_list(), it is possible for the
inode to be truncated and the embedded address_space to be freed at
the same time.  This may lead to the following race.

CPU1                                                CPU2

truncate(inode)                                     shrink_active_list()
  ...                                                 page_evictable(page)
  truncate_inode_page(mapping, page);
    delete_from_page_cache(page)
      spin_lock_irqsave(&mapping->tree_lock, flags);
        __delete_from_page_cache(page, NULL)
          page_cache_tree_delete(..)
            ...                                         mapping = page_mapping(page);
            page->mapping = NULL;
            ...
      spin_unlock_irqrestore(&mapping->tree_lock, flags);
      page_cache_free_page(mapping, page)
        put_page(page)
          if (put_page_testzero(page)) -> false
- inode now has no pages and can be freed including embedded address_space

                                                        mapping_unevictable(mapping)
                                                          test_bit(AS_UNEVICTABLE, &mapping->flags);
- we've dereferenced a mapping that is potentially already freed.

A similar race exists between swap cache freeing and page_evictable().

The address_space embedded in the inode and the swap cache
address_space are freed only after an RCU grace period.  So the races
are fixed by enclosing the page_mapping() call and the address_space
usage in rcu_read_lock()/rcu_read_unlock().  Comments are added in the
code to make it clear what is protected by the RCU read lock.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
---
 mm/vmscan.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d1c1e00b08bb..10a0f32a3f90 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3886,7 +3886,13 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
  */
 int page_evictable(struct page *page)
 {
-	return !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
+	int ret;
+
+	/* Prevent address_space of inode and swap cache from being freed */
+	rcu_read_lock();
+	ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
+	rcu_read_unlock();
+	return ret;
 }
 
 #ifdef CONFIG_SHMEM
-- 
2.15.1


* Re: [PATCH] mm: Fix races between address_space dereference and free in page_evictable
  2018-02-12  8:12 ` Huang, Ying
@ 2018-02-15  9:18   ` Jan Kara
  -1 siblings, 0 replies; 15+ messages in thread
From: Jan Kara @ 2018-02-15  9:18 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Mel Gorman, Minchan Kim,
	Jan Kara, Johannes Weiner, Michal Hocko

On Mon 12-02-18 16:12:27, Huang, Ying wrote:
> From: Huang Ying <ying.huang@intel.com>
> 
> When page_mapping() is called and the mapping is dereferenced in
> page_evicatable() through shrink_active_list(), it is possible for the
> inode to be truncated and the embedded address space to be freed at
> the same time.  This may lead to the following race.
> 
> CPU1                                                CPU2
> 
> truncate(inode)                                     shrink_active_list()
>   ...                                                 page_evictable(page)
>   truncate_inode_page(mapping, page);
>     delete_from_page_cache(page)
>       spin_lock_irqsave(&mapping->tree_lock, flags);
>         __delete_from_page_cache(page, NULL)
>           page_cache_tree_delete(..)
>             ...                                         mapping = page_mapping(page);
>             page->mapping = NULL;
>             ...
>       spin_unlock_irqrestore(&mapping->tree_lock, flags);
>       page_cache_free_page(mapping, page)
>         put_page(page)
>           if (put_page_testzero(page)) -> false
> - inode now has no pages and can be freed including embedded address_space
> 
>                                                         mapping_unevictable(mapping)
> 							  test_bit(AS_UNEVICTABLE, &mapping->flags);
> - we've dereferenced mapping which is potentially already free.
> 
> Similar race exists between swap cache freeing and page_evicatable() too.
> 
> The address_space in inode and swap cache will be freed after a RCU
> grace period.  So the races are fixed via enclosing the page_mapping()
> and address_space usage in rcu_read_lock/unlock().  Some comments are
> added in code to make it clear what is protected by the RCU read lock.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>

The race looks real (although very unlikely) and the patch looks good to me.
You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/vmscan.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d1c1e00b08bb..10a0f32a3f90 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3886,7 +3886,13 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
>   */
>  int page_evictable(struct page *page)
>  {
> -	return !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
> +	int ret;
> +
> +	/* Prevent address_space of inode and swap cache from being freed */
> +	rcu_read_lock();
> +	ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
> +	rcu_read_unlock();
> +	return ret;
>  }
>  
>  #ifdef CONFIG_SHMEM
> -- 
> 2.15.1
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH] mm: Fix races between address_space dereference and free in page_evictable
  2018-02-12  8:12 ` Huang, Ying
@ 2018-02-18  9:22   ` Minchan Kim
  -1 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2018-02-18  9:22 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Mel Gorman, Jan Kara,
	Johannes Weiner, Michal Hocko, linux-fsdevel

Hi Huang,

On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
> From: Huang Ying <ying.huang@intel.com>
> 
> When page_mapping() is called and the mapping is dereferenced in
> page_evicatable() through shrink_active_list(), it is possible for the
> inode to be truncated and the embedded address space to be freed at
> the same time.  This may lead to the following race.
> 
> CPU1                                                CPU2
> 
> truncate(inode)                                     shrink_active_list()
>   ...                                                 page_evictable(page)
>   truncate_inode_page(mapping, page);
>     delete_from_page_cache(page)
>       spin_lock_irqsave(&mapping->tree_lock, flags);
>         __delete_from_page_cache(page, NULL)
>           page_cache_tree_delete(..)
>             ...                                         mapping = page_mapping(page);
>             page->mapping = NULL;
>             ...
>       spin_unlock_irqrestore(&mapping->tree_lock, flags);
>       page_cache_free_page(mapping, page)
>         put_page(page)
>           if (put_page_testzero(page)) -> false
> - inode now has no pages and can be freed including embedded address_space
> 
>                                                         mapping_unevictable(mapping)
> 							  test_bit(AS_UNEVICTABLE, &mapping->flags);
> - we've dereferenced mapping which is potentially already free.
> 
> Similar race exists between swap cache freeing and page_evicatable() too.
> 
> The address_space in inode and swap cache will be freed after a RCU
> grace period.  So the races are fixed via enclosing the page_mapping()
> and address_space usage in rcu_read_lock/unlock().  Some comments are
> added in code to make it clear what is protected by the RCU read lock.

Is that always true for every filesystem, even upcoming ones?
IOW, do we have any strict rule that FS folks must use RCU (i.e.,
call_rcu()) to destroy inodes?

Let's cc linux-fsdevel.

> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> ---
>  mm/vmscan.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d1c1e00b08bb..10a0f32a3f90 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3886,7 +3886,13 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
>   */
>  int page_evictable(struct page *page)
>  {
> -	return !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
> +	int ret;
> +
> +	/* Prevent address_space of inode and swap cache from being freed */
> +	rcu_read_lock();
> +	ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
> +	rcu_read_unlock();
> +	return ret;
>  }
>  
>  #ifdef CONFIG_SHMEM
> -- 
> 2.15.1
> 


* Re: [PATCH] mm: Fix races between address_space dereference and free in page_evictable
  2018-02-18  9:22   ` Minchan Kim
@ 2018-02-19 10:57     ` Jan Kara
  -1 siblings, 0 replies; 15+ messages in thread
From: Jan Kara @ 2018-02-19 10:57 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Huang, Ying, Andrew Morton, linux-mm, linux-kernel, Mel Gorman,
	Jan Kara, Johannes Weiner, Michal Hocko, linux-fsdevel, Al Viro

Hi Minchan,

On Sun 18-02-18 18:22:45, Minchan Kim wrote:
> On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
> > From: Huang Ying <ying.huang@intel.com>
> > 
> > When page_mapping() is called and the mapping is dereferenced in
> > page_evicatable() through shrink_active_list(), it is possible for the
> > inode to be truncated and the embedded address space to be freed at
> > the same time.  This may lead to the following race.
> > 
> > CPU1                                                CPU2
> > 
> > truncate(inode)                                     shrink_active_list()
> >   ...                                                 page_evictable(page)
> >   truncate_inode_page(mapping, page);
> >     delete_from_page_cache(page)
> >       spin_lock_irqsave(&mapping->tree_lock, flags);
> >         __delete_from_page_cache(page, NULL)
> >           page_cache_tree_delete(..)
> >             ...                                         mapping = page_mapping(page);
> >             page->mapping = NULL;
> >             ...
> >       spin_unlock_irqrestore(&mapping->tree_lock, flags);
> >       page_cache_free_page(mapping, page)
> >         put_page(page)
> >           if (put_page_testzero(page)) -> false
> > - inode now has no pages and can be freed including embedded address_space
> > 
> >                                                         mapping_unevictable(mapping)
> > 							  test_bit(AS_UNEVICTABLE, &mapping->flags);
> > - we've dereferenced mapping which is potentially already free.
> > 
> > Similar race exists between swap cache freeing and page_evicatable() too.
> > 
> > The address_space in inode and swap cache will be freed after a RCU
> > grace period.  So the races are fixed via enclosing the page_mapping()
> > and address_space usage in rcu_read_lock/unlock().  Some comments are
> > added in code to make it clear what is protected by the RCU read lock.
> 
> Is it always true for every FSes, even upcoming FSes?
> IOW, do we have any strict rule FS folks must use RCU(i.e., call_rcu)
> to destroy inode?
> 
> Let's cc linux-fs.

That's actually a good question. Pathname lookup relies on inodes being
protected by RCU, so "normal" filesystems definitely need to use RCU
freeing of inodes. OTOH, a filesystem could in theory refuse any attempt
at RCU pathname walk (in its .d_revalidate/.d_compare callbacks) and
then get away with freeing its inodes normally, AFAICT. I don't see that
happening anywhere in the tree, but in theory it is possible with some
effort. Frankly, though, I don't see a good reason for that, so perhaps
all we should do is document that .destroy_inode needs to free the inode
structure through RCU if the filesystem uses the page cache? Al?

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH] mm: Fix races between address_space dereference and free in page_evictable
  2018-02-19 10:57     ` Jan Kara
@ 2018-02-26  5:20       ` Minchan Kim
  -1 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2018-02-26  5:20 UTC (permalink / raw)
  To: Jan Kara
  Cc: Huang, Ying, Andrew Morton, linux-mm, linux-kernel, Mel Gorman,
	Johannes Weiner, Michal Hocko, linux-fsdevel, Al Viro

Hi Jan,

On Mon, Feb 19, 2018 at 11:57:35AM +0100, Jan Kara wrote:
> Hi Minchan,
> 
> On Sun 18-02-18 18:22:45, Minchan Kim wrote:
> > On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
> > > From: Huang Ying <ying.huang@intel.com>
> > > 
> > > When page_mapping() is called and the mapping is dereferenced in
> > > page_evicatable() through shrink_active_list(), it is possible for the
> > > inode to be truncated and the embedded address space to be freed at
> > > the same time.  This may lead to the following race.
> > > 
> > > CPU1                                                CPU2
> > > 
> > > truncate(inode)                                     shrink_active_list()
> > >   ...                                                 page_evictable(page)
> > >   truncate_inode_page(mapping, page);
> > >     delete_from_page_cache(page)
> > >       spin_lock_irqsave(&mapping->tree_lock, flags);
> > >         __delete_from_page_cache(page, NULL)
> > >           page_cache_tree_delete(..)
> > >             ...                                         mapping = page_mapping(page);
> > >             page->mapping = NULL;
> > >             ...
> > >       spin_unlock_irqrestore(&mapping->tree_lock, flags);
> > >       page_cache_free_page(mapping, page)
> > >         put_page(page)
> > >           if (put_page_testzero(page)) -> false
> > > - inode now has no pages and can be freed including embedded address_space
> > > 
> > >                                                         mapping_unevictable(mapping)
> > > 							  test_bit(AS_UNEVICTABLE, &mapping->flags);
> > > - we've dereferenced mapping which is potentially already free.
> > > 
> > > Similar race exists between swap cache freeing and page_evicatable() too.
> > > 
> > > The address_space in inode and swap cache will be freed after a RCU
> > > grace period.  So the races are fixed via enclosing the page_mapping()
> > > and address_space usage in rcu_read_lock/unlock().  Some comments are
> > > added in code to make it clear what is protected by the RCU read lock.
> > 
> > Is it always true for every FSes, even upcoming FSes?
> > IOW, do we have any strict rule FS folks must use RCU(i.e., call_rcu)
> > to destroy inode?
> > 
> > Let's cc linux-fs.
> 
> That's actually a good question. Pathname lookup relies on inodes being
> protected by RCU so "normal" filesystems definitely need to use RCU freeing
> of inodes. OTOH a filesystem could in theory refuse any attempt for RCU
> pathname walk (in its .d_revalidate/.d_compare callback) and then get away
> with freeing its inodes normally AFAICT. I don't see that happening
> anywhere in the tree but in theory it is possible with some effort... But
> frankly I don't see a good reason for that so all we should do is to
> document that .destroy_inode needs to free the inode structure through RCU
> if it uses page cache? Al?

Yes, that would be much better. However, how does this patch fix the
whole problem?  Although it makes only page_evictable() itself safe, we
can go further with the page and eventually use page->mapping again.
For instance:

shrink_active_list
	page_evictable();
	..
	page_referenced()
		page_rmapping
			page->mapping

I think the caller should lock the page to protect the entire
operation; the page lock is the more widely used way to pin an
address_space.

Thanks.


* Re: [PATCH] mm: Fix races between address_space dereference and free in page_evicatable
  2018-02-26  5:20       ` Minchan Kim
@ 2018-02-26  6:38         ` Huang, Ying
  0 siblings, 0 replies; 15+ messages in thread
From: Huang, Ying @ 2018-02-26  6:38 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Jan Kara, Andrew Morton, linux-mm, linux-kernel, Mel Gorman,
	Johannes Weiner, Michal Hocko, linux-fsdevel, Al Viro

Minchan Kim <minchan@kernel.org> writes:

> Hi Jan,
>
> On Mon, Feb 19, 2018 at 11:57:35AM +0100, Jan Kara wrote:
>> Hi Minchan,
>> 
>> On Sun 18-02-18 18:22:45, Minchan Kim wrote:
>> > On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
>> > > From: Huang Ying <ying.huang@intel.com>
>> > > 
>> > > When page_mapping() is called and the mapping is dereferenced in
>> > > page_evicatable() through shrink_active_list(), it is possible for the
>> > > inode to be truncated and the embedded address space to be freed at
>> > > the same time.  This may lead to the following race.
>> > > 
>> > > CPU1                                                CPU2
>> > > 
>> > > truncate(inode)                                     shrink_active_list()
>> > >   ...                                                 page_evictable(page)
>> > >   truncate_inode_page(mapping, page);
>> > >     delete_from_page_cache(page)
>> > >       spin_lock_irqsave(&mapping->tree_lock, flags);
>> > >         __delete_from_page_cache(page, NULL)
>> > >           page_cache_tree_delete(..)
>> > >             ...                                         mapping = page_mapping(page);
>> > >             page->mapping = NULL;
>> > >             ...
>> > >       spin_unlock_irqrestore(&mapping->tree_lock, flags);
>> > >       page_cache_free_page(mapping, page)
>> > >         put_page(page)
>> > >           if (put_page_testzero(page)) -> false
>> > > - inode now has no pages and can be freed including embedded address_space
>> > > 
>> > >                                                         mapping_unevictable(mapping)
>> > > 							  test_bit(AS_UNEVICTABLE, &mapping->flags);
>> > > - we've dereferenced mapping which is potentially already free.
>> > > 
>> > > Similar race exists between swap cache freeing and page_evicatable() too.
>> > > 
>> > > The address_space in inode and swap cache will be freed after a RCU
>> > > grace period.  So the races are fixed via enclosing the page_mapping()
>> > > and address_space usage in rcu_read_lock/unlock().  Some comments are
>> > > added in code to make it clear what is protected by the RCU read lock.
>> > 
>> > Is it always true for every FSes, even upcoming FSes?
>> > IOW, do we have any strict rule FS folks must use RCU(i.e., call_rcu)
>> > to destroy inode?
>> > 
>> > Let's cc linux-fs.
>> 
>> That's actually a good question. Pathname lookup relies on inodes being
>> protected by RCU so "normal" filesystems definitely need to use RCU freeing
>> of inodes. OTOH a filesystem could in theory refuse any attempt for RCU
>> pathname walk (in its .d_revalidate/.d_compare callback) and then get away
>> with freeing its inodes normally AFAICT. I don't see that happening
>> anywhere in the tree but in theory it is possible with some effort... But
>> frankly I don't see a good reason for that so all we should do is to
>> document that .destroy_inode needs to free the inode structure through RCU
>> if it uses page cache? Al?
>
> Yub, it would be much better. However, how does this patch fix the problem?
> Although it makes only page_evictable() safe, we can still go further with
> the page and finally use page->mapping again.
> For instance,
>
> shrink_active_list
> 	page_evictable();
> 	..
> 	page_referenced()
> 		page_rmapping()
> 			page->mapping

This only checks the value of page->mapping; it does not dereference
page->mapping.  So it should be safe.

Best Regards,
Huang, Ying

> I think the caller should lock the page to protect the entire operation;
> the page lock has been used more widely to pin an address_space.
>
> Thanks.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] mm: Fix races between address_space dereference and free in page_evicatable
  2018-02-26  6:38         ` Huang, Ying
@ 2018-02-26  7:36           ` Minchan Kim
  -1 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2018-02-26  7:36 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Jan Kara, Andrew Morton, linux-mm, linux-kernel, Mel Gorman,
	Johannes Weiner, Michal Hocko, linux-fsdevel, Al Viro

On Mon, Feb 26, 2018 at 02:38:04PM +0800, Huang, Ying wrote:
> Minchan Kim <minchan@kernel.org> writes:
> 
> > Hi Jan,
> >
> > On Mon, Feb 19, 2018 at 11:57:35AM +0100, Jan Kara wrote:
> >> Hi Minchan,
> >> 
> >> On Sun 18-02-18 18:22:45, Minchan Kim wrote:
> >> > On Mon, Feb 12, 2018 at 04:12:27PM +0800, Huang, Ying wrote:
> >> > > From: Huang Ying <ying.huang@intel.com>
> >> > > 
> >> > > When page_mapping() is called and the mapping is dereferenced in
> >> > > page_evicatable() through shrink_active_list(), it is possible for the
> >> > > inode to be truncated and the embedded address space to be freed at
> >> > > the same time.  This may lead to the following race.
> >> > > 
> >> > > CPU1                                                CPU2
> >> > > 
> >> > > truncate(inode)                                     shrink_active_list()
> >> > >   ...                                                 page_evictable(page)
> >> > >   truncate_inode_page(mapping, page);
> >> > >     delete_from_page_cache(page)
> >> > >       spin_lock_irqsave(&mapping->tree_lock, flags);
> >> > >         __delete_from_page_cache(page, NULL)
> >> > >           page_cache_tree_delete(..)
> >> > >             ...                                         mapping = page_mapping(page);
> >> > >             page->mapping = NULL;
> >> > >             ...
> >> > >       spin_unlock_irqrestore(&mapping->tree_lock, flags);
> >> > >       page_cache_free_page(mapping, page)
> >> > >         put_page(page)
> >> > >           if (put_page_testzero(page)) -> false
> >> > > - inode now has no pages and can be freed including embedded address_space
> >> > > 
> >> > >                                                         mapping_unevictable(mapping)
> >> > > 							  test_bit(AS_UNEVICTABLE, &mapping->flags);
> >> > > - we've dereferenced mapping which is potentially already free.
> >> > > 
> >> > > Similar race exists between swap cache freeing and page_evicatable() too.
> >> > > 
> >> > > The address_space in inode and swap cache will be freed after a RCU
> >> > > grace period.  So the races are fixed via enclosing the page_mapping()
> >> > > and address_space usage in rcu_read_lock/unlock().  Some comments are
> >> > > added in code to make it clear what is protected by the RCU read lock.
> >> > 
> >> > Is it always true for every FSes, even upcoming FSes?
> >> > IOW, do we have any strict rule FS folks must use RCU(i.e., call_rcu)
> >> > to destroy inode?
> >> > 
> >> > Let's cc linux-fs.
> >> 
> >> That's actually a good question. Pathname lookup relies on inodes being
> >> protected by RCU so "normal" filesystems definitely need to use RCU freeing
> >> of inodes. OTOH a filesystem could in theory refuse any attempt for RCU
> >> pathname walk (in its .d_revalidate/.d_compare callback) and then get away
> >> with freeing its inodes normally AFAICT. I don't see that happening
> >> anywhere in the tree but in theory it is possible with some effort... But
> >> frankly I don't see a good reason for that so all we should do is to
> >> document that .destroy_inode needs to free the inode structure through RCU
> >> if it uses page cache? Al?
> >
> > Yub, it would be much better. However, how does this patch fix the problem?
> > Although it makes only page_evictable() safe, we can still go further with
> > the page and finally use page->mapping again.
> > For instance,
> >
> > shrink_active_list
> > 	page_evictable();
> > 	..
> > 	page_referenced()
> > 		page_rmapping()
> > 			page->mapping
> 
> This only checks the value of page->mapping; it does not dereference
> page->mapping.  So it should be safe.

Oops, you're right. I got confused. However, I want to make the locking
consistent (i.e., use the page lock to protect the address_space) but cannot
come up with a better way.

Sorry for the noise, Huang.

> 
> Best Regards,
> Huang, Ying
> 
> > I think the caller should lock the page to protect the entire operation;
> > the page lock has been used more widely to pin an address_space.
> >
> > Thanks.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2018-02-26  7:37 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-12  8:12 [PATCH] mm: Fix races between address_space dereference and free in page_evicatable Huang, Ying
2018-02-12  8:12 ` Huang, Ying
2018-02-15  9:18 ` Jan Kara
2018-02-15  9:18   ` Jan Kara
2018-02-18  9:22 ` Minchan Kim
2018-02-18  9:22   ` Minchan Kim
2018-02-19 10:57   ` Jan Kara
2018-02-19 10:57     ` Jan Kara
2018-02-26  5:20     ` Minchan Kim
2018-02-26  5:20       ` Minchan Kim
2018-02-26  6:38       ` Huang, Ying
2018-02-26  6:38         ` Huang, Ying
2018-02-26  6:38         ` Huang, Ying
2018-02-26  7:36         ` Minchan Kim
2018-02-26  7:36           ` Minchan Kim
