* [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations
@ 2019-09-16 15:35 Bill O'Donnell
  2019-09-16 21:24 ` Eric Sandeen
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-16 15:35 UTC (permalink / raw)
  To: linux-xfs; +Cc: darrick.wong

Guarantee zeroed memory buffers for cases where potential memory
leak to disk can occur. In these cases, kmem_alloc is used and
doesn't zero the buffer, opening the possibility of information
leakage to disk.

Introduce an xfs_buf_flag, _XBF_KMZ, to indicate a request for a zeroed
buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
obtain the already zeroed buffer from kernel memory.

This solution avoids the performance issue that would occur if a
wholesale change to replace kmem_alloc with kmem_zalloc was done.

Signed-off-by: Bill O'Donnell <billodo@redhat.com>
---
 fs/xfs/xfs_buf.c | 8 ++++++--
 fs/xfs/xfs_buf.h | 4 +++-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 120ef99d09e8..916a3f782950 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -345,16 +345,19 @@ xfs_buf_allocate_memory(
 	unsigned short		page_count, i;
 	xfs_off_t		start, end;
 	int			error;
+	uint			kmflag_mask = 0;
 
 	/*
 	 * for buffers that are contained within a single page, just allocate
 	 * the memory from the heap - there's no need for the complexity of
 	 * page arrays to keep allocation down to order 0.
 	 */
+	if (flags & _XBF_KMZ)
+		kmflag_mask |= KM_ZERO;
 	size = BBTOB(bp->b_length);
 	if (size < PAGE_SIZE) {
 		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
+		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | kmflag_mask);
 		if (!bp->b_addr) {
 			/* low memory - use alloc_page loop instead */
 			goto use_alloc_page;
@@ -391,7 +394,7 @@ xfs_buf_allocate_memory(
 		struct page	*page;
 		uint		retries = 0;
 retry:
-		page = alloc_page(gfp_mask);
+		page = alloc_page(gfp_mask | kmflag_mask);
 		if (unlikely(page == NULL)) {
 			if (flags & XBF_READ_AHEAD) {
 				bp->b_page_count = i;
@@ -683,6 +686,7 @@ xfs_buf_get_map(
 	struct xfs_buf		*new_bp;
 	int			error = 0;
 
+	flags |= _XBF_KMZ;
 	error = xfs_buf_find(target, map, nmaps, flags, NULL, &bp);
 
 	switch (error) {
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index f6ce17d8d848..416ff588240a 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -38,6 +38,7 @@
 #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
 #define _XBF_KMEM	 (1 << 21)/* backed by heap memory */
 #define _XBF_DELWRI_Q	 (1 << 22)/* buffer on a delwri queue */
+#define _XBF_KMZ	 (1 << 23)/* zeroed buffer required */
 
 typedef unsigned int xfs_buf_flags_t;
 
@@ -54,7 +55,8 @@ typedef unsigned int xfs_buf_flags_t;
 	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
 	{ _XBF_PAGES,		"PAGES" }, \
 	{ _XBF_KMEM,		"KMEM" }, \
-	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
+	{ _XBF_DELWRI_Q,	"DELWRI_Q" }, \
+	{ _XBF_KMZ,             "KMEM_Z" }
 
 
 /*
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 15:35 [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations Bill O'Donnell
@ 2019-09-16 21:24 ` Eric Sandeen
  2019-09-16 21:30   ` Darrick J. Wong
  2019-09-16 21:32   ` Bill O'Donnell
  2019-09-16 21:54 ` Dave Chinner
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 17+ messages in thread
From: Eric Sandeen @ 2019-09-16 21:24 UTC (permalink / raw)
  To: Bill O'Donnell, linux-xfs; +Cc: darrick.wong

On 9/16/19 10:35 AM, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
> 
> Introduce an xfs_buf_flag, _XBF_KMZ, to indicate a request for a zeroed
> buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
> obtain the already zeroed buffer from kernel memory.
> 
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
> 
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>

I think this can probably be further optimized by not obtaining zeroed
memory when we're about to fill the buffer from disk as the very
next step.

(In this case, xfs_buf_read_map calls xfs_buf_get_map and then immediately
reads the buffer from disk with _xfs_buf_read.) xfs_buf_read_map adds
XBF_READ to the flags during this process.

So I wonder if this can be simplified/optimized by just checking for XBF_READ
in xfs_buf_allocate_memory's flags, and if it's not set, then request
zeroed memory, because that indicates a buffer we'll be filling in from
memory and subsequently writing to disk.

-Eric

> ---
>  fs/xfs/xfs_buf.c | 8 ++++++--
>  fs/xfs/xfs_buf.h | 4 +++-
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..916a3f782950 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,16 +345,19 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
>  
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
>  	 * the memory from the heap - there's no need for the complexity of
>  	 * page arrays to keep allocation down to order 0.
>  	 */
> +	if (flags & _XBF_KMZ)
> +		kmflag_mask |= KM_ZERO;
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> @@ -391,7 +394,7 @@ xfs_buf_allocate_memory(
>  		struct page	*page;
>  		uint		retries = 0;
>  retry:
> -		page = alloc_page(gfp_mask);
> +		page = alloc_page(gfp_mask | kmflag_mask);
>  		if (unlikely(page == NULL)) {
>  			if (flags & XBF_READ_AHEAD) {
>  				bp->b_page_count = i;
> @@ -683,6 +686,7 @@ xfs_buf_get_map(
>  	struct xfs_buf		*new_bp;
>  	int			error = 0;
>  
> +	flags |= _XBF_KMZ;
>  	error = xfs_buf_find(target, map, nmaps, flags, NULL, &bp);
>  
>  	switch (error) {
> diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> index f6ce17d8d848..416ff588240a 100644
> --- a/fs/xfs/xfs_buf.h
> +++ b/fs/xfs/xfs_buf.h
> @@ -38,6 +38,7 @@
>  #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
>  #define _XBF_KMEM	 (1 << 21)/* backed by heap memory */
>  #define _XBF_DELWRI_Q	 (1 << 22)/* buffer on a delwri queue */
> +#define _XBF_KMZ	 (1 << 23)/* zeroed buffer required */
>  
>  typedef unsigned int xfs_buf_flags_t;
>  
> @@ -54,7 +55,8 @@ typedef unsigned int xfs_buf_flags_t;
>  	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
>  	{ _XBF_PAGES,		"PAGES" }, \
>  	{ _XBF_KMEM,		"KMEM" }, \
> -	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
> +	{ _XBF_DELWRI_Q,	"DELWRI_Q" }, \
> +	{ _XBF_KMZ,             "KMEM_Z" }
>  
>  
>  /*
> 


* Re: [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 21:24 ` Eric Sandeen
@ 2019-09-16 21:30   ` Darrick J. Wong
  2019-09-16 21:32   ` Bill O'Donnell
  1 sibling, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2019-09-16 21:30 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: Bill O'Donnell, linux-xfs

On Mon, Sep 16, 2019 at 04:24:40PM -0500, Eric Sandeen wrote:
> On 9/16/19 10:35 AM, Bill O'Donnell wrote:
> > Guarantee zeroed memory buffers for cases where potential memory
> > leak to disk can occur. In these cases, kmem_alloc is used and
> > doesn't zero the buffer, opening the possibility of information
> > leakage to disk.
> > 
> > Introduce an xfs_buf_flag, _XBF_KMZ, to indicate a request for a zeroed
> > buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
> > obtain the already zeroed buffer from kernel memory.
> > 
> > This solution avoids the performance issue that would occur if a
> > wholesale change to replace kmem_alloc with kmem_zalloc was done.
> > 
> > Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> 
> I think this can probably be further optimized by not obtaining zeroed
> memory when we're about to fill the buffer from disk as the very
> next step.
> 
> (In this case, xfs_buf_read_map calls xfs_buf_get_map and then immediately
> reads the buffer from disk with _xfs_buf_read.) xfs_buf_read_map adds
> XBF_READ to the flags during this process.
> 
> So I wonder if this can be simplified/optimized by just checking for XBF_READ
> in xfs_buf_allocate_memory's flags, and if it's not set, then request
> zeroed memory, because that indicates a buffer we'll be filling in from
> memory and subsequently writing to disk.

I was wondering that ("Why can't we allocate a zeroed buffer only for
the get_buf case so that we don't have to do that for the read_buf
case?") too.  Once you do that then you can then remove all the explicit
memset calls too.

> -Eric
> 
> > ---
> >  fs/xfs/xfs_buf.c | 8 ++++++--
> >  fs/xfs/xfs_buf.h | 4 +++-
> >  2 files changed, 9 insertions(+), 3 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index 120ef99d09e8..916a3f782950 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -345,16 +345,19 @@ xfs_buf_allocate_memory(
> >  	unsigned short		page_count, i;
> >  	xfs_off_t		start, end;
> >  	int			error;
> > +	uint			kmflag_mask = 0;
> >  
> >  	/*
> >  	 * for buffers that are contained within a single page, just allocate
> >  	 * the memory from the heap - there's no need for the complexity of
> >  	 * page arrays to keep allocation down to order 0.
> >  	 */
> > +	if (flags & _XBF_KMZ)
> > +		kmflag_mask |= KM_ZERO;
> >  	size = BBTOB(bp->b_length);
> >  	if (size < PAGE_SIZE) {
> >  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> > -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> > +		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | kmflag_mask);

Does this overflow 80 columns?

--D

> >  		if (!bp->b_addr) {
> >  			/* low memory - use alloc_page loop instead */
> >  			goto use_alloc_page;
> > @@ -391,7 +394,7 @@ xfs_buf_allocate_memory(
> >  		struct page	*page;
> >  		uint		retries = 0;
> >  retry:
> > -		page = alloc_page(gfp_mask);
> > +		page = alloc_page(gfp_mask | kmflag_mask);
> >  		if (unlikely(page == NULL)) {
> >  			if (flags & XBF_READ_AHEAD) {
> >  				bp->b_page_count = i;
> > @@ -683,6 +686,7 @@ xfs_buf_get_map(
> >  	struct xfs_buf		*new_bp;
> >  	int			error = 0;
> >  
> > +	flags |= _XBF_KMZ;
> >  	error = xfs_buf_find(target, map, nmaps, flags, NULL, &bp);
> >  
> >  	switch (error) {
> > diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> > index f6ce17d8d848..416ff588240a 100644
> > --- a/fs/xfs/xfs_buf.h
> > +++ b/fs/xfs/xfs_buf.h
> > @@ -38,6 +38,7 @@
> >  #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
> >  #define _XBF_KMEM	 (1 << 21)/* backed by heap memory */
> >  #define _XBF_DELWRI_Q	 (1 << 22)/* buffer on a delwri queue */
> > +#define _XBF_KMZ	 (1 << 23)/* zeroed buffer required */
> >  
> >  typedef unsigned int xfs_buf_flags_t;
> >  
> > @@ -54,7 +55,8 @@ typedef unsigned int xfs_buf_flags_t;
> >  	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
> >  	{ _XBF_PAGES,		"PAGES" }, \
> >  	{ _XBF_KMEM,		"KMEM" }, \
> > -	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
> > +	{ _XBF_DELWRI_Q,	"DELWRI_Q" }, \
> > +	{ _XBF_KMZ,             "KMEM_Z" }
> >  
> >  
> >  /*
> > 


* Re: [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 21:24 ` Eric Sandeen
  2019-09-16 21:30   ` Darrick J. Wong
@ 2019-09-16 21:32   ` Bill O'Donnell
  1 sibling, 0 replies; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-16 21:32 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-xfs, darrick.wong

On Mon, Sep 16, 2019 at 04:24:40PM -0500, Eric Sandeen wrote:
> On 9/16/19 10:35 AM, Bill O'Donnell wrote:
> > Guarantee zeroed memory buffers for cases where potential memory
> > leak to disk can occur. In these cases, kmem_alloc is used and
> > doesn't zero the buffer, opening the possibility of information
> > leakage to disk.
> > 
> > Introduce an xfs_buf_flag, _XBF_KMZ, to indicate a request for a zeroed
> > buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
> > obtain the already zeroed buffer from kernel memory.
> > 
> > This solution avoids the performance issue that would occur if a
> > wholesale change to replace kmem_alloc with kmem_zalloc was done.
> > 
> > Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> 
> I think this can probably be further optimized by not obtaining zeroed
> memory when we're about to fill the buffer from disk as the very
> next step.

Yep. I missed that redundancy.

> 
> (In this case, xfs_buf_read_map calls xfs_buf_get_map and then immediately
> reads the buffer from disk with _xfs_buf_read.) xfs_buf_read_map adds
> XBF_READ to the flags during this process.
> 
> So I wonder if this can be simplified/optimized by just checking for XBF_READ
> in xfs_buf_allocate_memory's flags, and if it's not set, then request
> zeroed memory, because that indicates a buffer we'll be filling in from
> memory and subsequently writing to disk.

nod.

> 
> -Eric
> 
> > ---
> >  fs/xfs/xfs_buf.c | 8 ++++++--
> >  fs/xfs/xfs_buf.h | 4 +++-
> >  2 files changed, 9 insertions(+), 3 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index 120ef99d09e8..916a3f782950 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -345,16 +345,19 @@ xfs_buf_allocate_memory(
> >  	unsigned short		page_count, i;
> >  	xfs_off_t		start, end;
> >  	int			error;
> > +	uint			kmflag_mask = 0;
> >  
> >  	/*
> >  	 * for buffers that are contained within a single page, just allocate
> >  	 * the memory from the heap - there's no need for the complexity of
> >  	 * page arrays to keep allocation down to order 0.
> >  	 */
> > +	if (flags & _XBF_KMZ)
> > +		kmflag_mask |= KM_ZERO;
> >  	size = BBTOB(bp->b_length);
> >  	if (size < PAGE_SIZE) {
> >  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> > -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> > +		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | kmflag_mask);
> >  		if (!bp->b_addr) {
> >  			/* low memory - use alloc_page loop instead */
> >  			goto use_alloc_page;
> > @@ -391,7 +394,7 @@ xfs_buf_allocate_memory(
> >  		struct page	*page;
> >  		uint		retries = 0;
> >  retry:
> > -		page = alloc_page(gfp_mask);
> > +		page = alloc_page(gfp_mask | kmflag_mask);
> >  		if (unlikely(page == NULL)) {
> >  			if (flags & XBF_READ_AHEAD) {
> >  				bp->b_page_count = i;
> > @@ -683,6 +686,7 @@ xfs_buf_get_map(
> >  	struct xfs_buf		*new_bp;
> >  	int			error = 0;
> >  
> > +	flags |= _XBF_KMZ;
> >  	error = xfs_buf_find(target, map, nmaps, flags, NULL, &bp);
> >  
> >  	switch (error) {
> > diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> > index f6ce17d8d848..416ff588240a 100644
> > --- a/fs/xfs/xfs_buf.h
> > +++ b/fs/xfs/xfs_buf.h
> > @@ -38,6 +38,7 @@
> >  #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
> >  #define _XBF_KMEM	 (1 << 21)/* backed by heap memory */
> >  #define _XBF_DELWRI_Q	 (1 << 22)/* buffer on a delwri queue */
> > +#define _XBF_KMZ	 (1 << 23)/* zeroed buffer required */
> >  
> >  typedef unsigned int xfs_buf_flags_t;
> >  
> > @@ -54,7 +55,8 @@ typedef unsigned int xfs_buf_flags_t;
> >  	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
> >  	{ _XBF_PAGES,		"PAGES" }, \
> >  	{ _XBF_KMEM,		"KMEM" }, \
> > -	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
> > +	{ _XBF_DELWRI_Q,	"DELWRI_Q" }, \
> > +	{ _XBF_KMZ,             "KMEM_Z" }
> >  
> >  
> >  /*
> > 


* Re: [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 15:35 [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations Bill O'Donnell
  2019-09-16 21:24 ` Eric Sandeen
@ 2019-09-16 21:54 ` Dave Chinner
  2019-09-18 15:47 ` [PATCH v2] " Bill O'Donnell
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Dave Chinner @ 2019-09-16 21:54 UTC (permalink / raw)
  To: Bill O'Donnell; +Cc: linux-xfs, darrick.wong

On Mon, Sep 16, 2019 at 10:35:04AM -0500, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
> 
> Introduce an xfs_buf_flag, _XBF_KMZ, to indicate a request for a zeroed
> buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
> obtain the already zeroed buffer from kernel memory.
> 
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
> 
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> ---
>  fs/xfs/xfs_buf.c | 8 ++++++--
>  fs/xfs/xfs_buf.h | 4 +++-
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..916a3f782950 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,16 +345,19 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
>  
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
>  	 * the memory from the heap - there's no need for the complexity of
>  	 * page arrays to keep allocation down to order 0.
>  	 */
> +	if (flags & _XBF_KMZ)
> +		kmflag_mask |= KM_ZERO;
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> @@ -391,7 +394,7 @@ xfs_buf_allocate_memory(
>  		struct page	*page;
>  		uint		retries = 0;
>  retry:
> -		page = alloc_page(gfp_mask);
> +		page = alloc_page(gfp_mask | kmflag_mask);
>  		if (unlikely(page == NULL)) {
>  			if (flags & XBF_READ_AHEAD) {
>  				bp->b_page_count = i;
> @@ -683,6 +686,7 @@ xfs_buf_get_map(
>  	struct xfs_buf		*new_bp;
>  	int			error = 0;
>  
> +	flags |= _XBF_KMZ;
>  	error = xfs_buf_find(target, map, nmaps, flags, NULL, &bp);

IIRC, this flag was supposed to go into xfs_trans_get_buf_map()
and direct callers of xfs_buf_get*() that weren't in the read path.
That avoids the need for zeroing pages that we are going to DMA
actual data into before it gets to users...

>  	switch (error) {
> diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> index f6ce17d8d848..416ff588240a 100644
> --- a/fs/xfs/xfs_buf.h
> +++ b/fs/xfs/xfs_buf.h
> @@ -38,6 +38,7 @@
>  #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
>  #define _XBF_KMEM	 (1 << 21)/* backed by heap memory */
>  #define _XBF_DELWRI_Q	 (1 << 22)/* buffer on a delwri queue */
> +#define _XBF_KMZ	 (1 << 23)/* zeroed buffer required */

"KMZ" isn't very descriptive, and it shouldn't have a "_" prefix as
it's not internal to the buffer cache - it's a caller controlled
flag like XBF_TRYLOCK.

I'd suggest something like XBF_INIT_PAGES or XBF_ZERO to make it
clear we are asking for the buffer to be explicitly initialised
to zero.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* [PATCH v2] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 15:35 [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations Bill O'Donnell
  2019-09-16 21:24 ` Eric Sandeen
  2019-09-16 21:54 ` Dave Chinner
@ 2019-09-18 15:47 ` Bill O'Donnell
  2019-09-18 16:32   ` Darrick J. Wong
  2019-09-19 15:01 ` [PATCH v3] " Bill O'Donnell
  2019-09-19 21:01 ` [PATCH v4] " Bill O'Donnell
  4 siblings, 1 reply; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-18 15:47 UTC (permalink / raw)
  To: linux-xfs; +Cc: darrick.wong

Guarantee zeroed memory buffers for cases where potential memory
leak to disk can occur. In these cases, kmem_alloc is used and
doesn't zero the buffer, opening the possibility of information
leakage to disk.

Introduce an xfs_buf_flag, XBF_ZERO, to indicate a request for a zeroed
buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
obtain the already zeroed buffer from kernel memory.

This solution avoids the performance issue that would occur if a
wholesale change to replace kmem_alloc with kmem_zalloc was done.

Signed-off-by: Bill O'Donnell <billodo@redhat.com>
---
v2: zeroed buffer not required for XBF_READ case. Correct placement
    and rename the XBF_ZERO flag.

 fs/xfs/xfs_buf.c       | 9 +++++++--
 fs/xfs/xfs_buf.h       | 2 ++
 fs/xfs/xfs_trans_buf.c | 2 ++
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 120ef99d09e8..0d96efff451e 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -345,6 +345,10 @@ xfs_buf_allocate_memory(
 	unsigned short		page_count, i;
 	xfs_off_t		start, end;
 	int			error;
+	uint			kmflag_mask = 0;
+
+	if ((flags & XBF_ZERO) && !(flags & XBF_READ))
+		kmflag_mask |= KM_ZERO;
 
 	/*
 	 * for buffers that are contained within a single page, just allocate
@@ -354,7 +358,8 @@ xfs_buf_allocate_memory(
 	size = BBTOB(bp->b_length);
 	if (size < PAGE_SIZE) {
 		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
+		bp->b_addr = kmem_alloc_io(size, align_mask,
+					   KM_NOFS | kmflag_mask);
 		if (!bp->b_addr) {
 			/* low memory - use alloc_page loop instead */
 			goto use_alloc_page;
@@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
 		struct page	*page;
 		uint		retries = 0;
 retry:
-		page = alloc_page(gfp_mask);
+		page = alloc_page(gfp_mask | kmflag_mask);
 		if (unlikely(page == NULL)) {
 			if (flags & XBF_READ_AHEAD) {
 				bp->b_page_count = i;
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index f6ce17d8d848..dccdd653c9dc 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -33,6 +33,7 @@
 /* flags used only as arguments to access routines */
 #define XBF_TRYLOCK	 (1 << 16)/* lock requested, but do not wait */
 #define XBF_UNMAPPED	 (1 << 17)/* do not map the buffer */
+#define XBF_ZERO	 (1 << 18)/* zeroed buffer required */
 
 /* flags used only internally */
 #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
@@ -52,6 +53,7 @@ typedef unsigned int xfs_buf_flags_t;
 	{ XBF_WRITE_FAIL,	"WRITE_FAIL" }, \
 	{ XBF_TRYLOCK,		"TRYLOCK" },	/* should never be set */\
 	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
+	{ XBF_ZERO,		"KMEM_ZERO" }, \
 	{ _XBF_PAGES,		"PAGES" }, \
 	{ _XBF_KMEM,		"KMEM" }, \
 	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
diff --git a/fs/xfs/xfs_trans_buf.c b/fs/xfs/xfs_trans_buf.c
index b5b3a78ef31c..087d413c1490 100644
--- a/fs/xfs/xfs_trans_buf.c
+++ b/fs/xfs/xfs_trans_buf.c
@@ -123,6 +123,8 @@ xfs_trans_get_buf_map(
 	xfs_buf_t		*bp;
 	struct xfs_buf_log_item	*bip;
 
+	flags |= XBF_ZERO;
+
 	if (!tp)
 		return xfs_buf_get_map(target, map, nmaps, flags);
 
-- 
2.21.0



* Re: [PATCH v2] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-18 15:47 ` [PATCH v2] " Bill O'Donnell
@ 2019-09-18 16:32   ` Darrick J. Wong
  2019-09-18 22:11     ` Bill O'Donnell
  0 siblings, 1 reply; 17+ messages in thread
From: Darrick J. Wong @ 2019-09-18 16:32 UTC (permalink / raw)
  To: Bill O'Donnell; +Cc: linux-xfs

On Wed, Sep 18, 2019 at 10:47:33AM -0500, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
> 
> Introduce an xfs_buf_flag, XBF_ZERO, to indicate a request for a zeroed
> buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
> obtain the already zeroed buffer from kernel memory.
> 
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
> 
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> ---
> v2: zeroed buffer not required for XBF_READ case. Correct placement
>     and rename the XBF_ZERO flag.
> 
>  fs/xfs/xfs_buf.c       | 9 +++++++--
>  fs/xfs/xfs_buf.h       | 2 ++
>  fs/xfs/xfs_trans_buf.c | 2 ++
>  3 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..0d96efff451e 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,6 +345,10 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
> +
> +	if ((flags & XBF_ZERO) && !(flags & XBF_READ))

The sole caller of xfs_buf_allocate_memory is xfs_buf_get_map.  If
_get_map is called from the *read_buf* functions, they pass in XBF_READ
to ensure buffer contents are read.  The other _get_map callers seem to
be initializing metadata blocks and do not set XBF_READ.

So I wonder, do you even need XBF_ZERO?  Or could this be reduced to:

	if (!(flags & XBF_READ))
		kmflag_mask |= KM_ZERO;

?

--D

> +		kmflag_mask |= KM_ZERO;
>  
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
> @@ -354,7 +358,8 @@ xfs_buf_allocate_memory(
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask,
> +					   KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
>  		struct page	*page;
>  		uint		retries = 0;
>  retry:
> -		page = alloc_page(gfp_mask);
> +		page = alloc_page(gfp_mask | kmflag_mask);
>  		if (unlikely(page == NULL)) {
>  			if (flags & XBF_READ_AHEAD) {
>  				bp->b_page_count = i;
> diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> index f6ce17d8d848..dccdd653c9dc 100644
> --- a/fs/xfs/xfs_buf.h
> +++ b/fs/xfs/xfs_buf.h
> @@ -33,6 +33,7 @@
>  /* flags used only as arguments to access routines */
>  #define XBF_TRYLOCK	 (1 << 16)/* lock requested, but do not wait */
>  #define XBF_UNMAPPED	 (1 << 17)/* do not map the buffer */
> +#define XBF_ZERO	 (1 << 18)/* zeroed buffer required */
>  
>  /* flags used only internally */
>  #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
> @@ -52,6 +53,7 @@ typedef unsigned int xfs_buf_flags_t;
>  	{ XBF_WRITE_FAIL,	"WRITE_FAIL" }, \
>  	{ XBF_TRYLOCK,		"TRYLOCK" },	/* should never be set */\
>  	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
> +	{ XBF_ZERO,		"KMEM_ZERO" }, \
>  	{ _XBF_PAGES,		"PAGES" }, \
>  	{ _XBF_KMEM,		"KMEM" }, \
>  	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
> diff --git a/fs/xfs/xfs_trans_buf.c b/fs/xfs/xfs_trans_buf.c
> index b5b3a78ef31c..087d413c1490 100644
> --- a/fs/xfs/xfs_trans_buf.c
> +++ b/fs/xfs/xfs_trans_buf.c
> @@ -123,6 +123,8 @@ xfs_trans_get_buf_map(
>  	xfs_buf_t		*bp;
>  	struct xfs_buf_log_item	*bip;
>  
> +	flags |= XBF_ZERO;
> +
>  	if (!tp)
>  		return xfs_buf_get_map(target, map, nmaps, flags);
>  
> -- 
> 2.21.0
> 


* Re: [PATCH v2] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-18 16:32   ` Darrick J. Wong
@ 2019-09-18 22:11     ` Bill O'Donnell
  0 siblings, 0 replies; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-18 22:11 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs

On Wed, Sep 18, 2019 at 09:32:13AM -0700, Darrick J. Wong wrote:
> On Wed, Sep 18, 2019 at 10:47:33AM -0500, Bill O'Donnell wrote:
> > Guarantee zeroed memory buffers for cases where potential memory
> > leak to disk can occur. In these cases, kmem_alloc is used and
> > doesn't zero the buffer, opening the possibility of information
> > leakage to disk.
> > 
> > Introduce an xfs_buf_flag, XBF_ZERO, to indicate a request for a zeroed
> > buffer, and use existing infrastructure (xfs_buf_allocate_memory) to
> > obtain the already zeroed buffer from kernel memory.
> > 
> > This solution avoids the performance issue that would occur if a
> > wholesale change to replace kmem_alloc with kmem_zalloc was done.
> > 
> > Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> > ---
> > v2: zeroed buffer not required for XBF_READ case. Correct placement
> >     and rename the XBF_ZERO flag.
> > 
> >  fs/xfs/xfs_buf.c       | 9 +++++++--
> >  fs/xfs/xfs_buf.h       | 2 ++
> >  fs/xfs/xfs_trans_buf.c | 2 ++
> >  3 files changed, 11 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index 120ef99d09e8..0d96efff451e 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -345,6 +345,10 @@ xfs_buf_allocate_memory(
> >  	unsigned short		page_count, i;
> >  	xfs_off_t		start, end;
> >  	int			error;
> > +	uint			kmflag_mask = 0;
> > +
> > +	if ((flags & XBF_ZERO) && !(flags & XBF_READ))
> 
> The sole caller of xfs_buf_allocate_memory is xfs_buf_get_map.  If
> _get_map is called from the *read_buf* functions, they pass in XBF_READ
> to ensure buffer contents are read.  The other _get_map callers seem to
> be initializing metadata blocks and do not set XBF_READ.
> 
> So I wonder, do you even need XBF_ZERO?  Or could this be reduced to:
> 
> 	if (!(flags & XBF_READ))
> 		kmflag_mask |= KM_ZERO;
> 
> ?
> 
> --D

Yeah, I see no cases of XBF_ZERO true AND XBF_READ true.
Anytime XBF_ZERO is true, XBF_READ is false.
So, yes, it could be reduced by eliminating XBF_ZERO.

Thanks,
Bill

> 
> > +		kmflag_mask |= KM_ZERO;
> >  
> >  	/*
> >  	 * for buffers that are contained within a single page, just allocate
> > @@ -354,7 +358,8 @@ xfs_buf_allocate_memory(
> >  	size = BBTOB(bp->b_length);
> >  	if (size < PAGE_SIZE) {
> >  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> > -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> > +		bp->b_addr = kmem_alloc_io(size, align_mask,
> > +					   KM_NOFS | kmflag_mask);
> >  		if (!bp->b_addr) {
> >  			/* low memory - use alloc_page loop instead */
> >  			goto use_alloc_page;
> > @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> >  		struct page	*page;
> >  		uint		retries = 0;
> >  retry:
> > -		page = alloc_page(gfp_mask);
> > +		page = alloc_page(gfp_mask | kmflag_mask);
> >  		if (unlikely(page == NULL)) {
> >  			if (flags & XBF_READ_AHEAD) {
> >  				bp->b_page_count = i;
> > diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> > index f6ce17d8d848..dccdd653c9dc 100644
> > --- a/fs/xfs/xfs_buf.h
> > +++ b/fs/xfs/xfs_buf.h
> > @@ -33,6 +33,7 @@
> >  /* flags used only as arguments to access routines */
> >  #define XBF_TRYLOCK	 (1 << 16)/* lock requested, but do not wait */
> >  #define XBF_UNMAPPED	 (1 << 17)/* do not map the buffer */
> > +#define XBF_ZERO	 (1 << 18)/* zeroed buffer required */
> >  
> >  /* flags used only internally */
> >  #define _XBF_PAGES	 (1 << 20)/* backed by refcounted pages */
> > @@ -52,6 +53,7 @@ typedef unsigned int xfs_buf_flags_t;
> >  	{ XBF_WRITE_FAIL,	"WRITE_FAIL" }, \
> >  	{ XBF_TRYLOCK,		"TRYLOCK" },	/* should never be set */\
> >  	{ XBF_UNMAPPED,		"UNMAPPED" },	/* ditto */\
> > +	{ XBF_ZERO,		"KMEM_ZERO" }, \
> >  	{ _XBF_PAGES,		"PAGES" }, \
> >  	{ _XBF_KMEM,		"KMEM" }, \
> >  	{ _XBF_DELWRI_Q,	"DELWRI_Q" }
> > diff --git a/fs/xfs/xfs_trans_buf.c b/fs/xfs/xfs_trans_buf.c
> > index b5b3a78ef31c..087d413c1490 100644
> > --- a/fs/xfs/xfs_trans_buf.c
> > +++ b/fs/xfs/xfs_trans_buf.c
> > @@ -123,6 +123,8 @@ xfs_trans_get_buf_map(
> >  	xfs_buf_t		*bp;
> >  	struct xfs_buf_log_item	*bip;
> >  
> > +	flags |= XBF_ZERO;
> > +
> >  	if (!tp)
> >  		return xfs_buf_get_map(target, map, nmaps, flags);
> >  
> > -- 
> > 2.21.0
> > 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 15:35 [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations Bill O'Donnell
                   ` (2 preceding siblings ...)
  2019-09-18 15:47 ` [PATCH v2] " Bill O'Donnell
@ 2019-09-19 15:01 ` Bill O'Donnell
  2019-09-19 17:03   ` Christoph Hellwig
  2019-09-19 21:01 ` [PATCH v4] " Bill O'Donnell
  4 siblings, 1 reply; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-19 15:01 UTC (permalink / raw)
  To: linux-xfs

Guarantee zeroed memory buffers for cases where potential memory
leak to disk can occur. In these cases, kmem_alloc is used and
doesn't zero the buffer, opening the possibility of information
leakage to disk.

Use existing infrastructure (xfs_buf_allocate_memory) to obtain
the already zeroed buffer from kernel memory.

This solution avoids the performance issue that would occur if a
wholesale change to replace kmem_alloc with kmem_zalloc was done.

Signed-off-by: Bill O'Donnell <billodo@redhat.com>
---
v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
v2: zeroed buffer not required for XBF_READ case. Correct placement
    and rename the XBF_ZERO flag.


 fs/xfs/xfs_buf.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 120ef99d09e8..6fbe63f34a68 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -345,6 +345,10 @@ xfs_buf_allocate_memory(
 	unsigned short		page_count, i;
 	xfs_off_t		start, end;
 	int			error;
+	uint			kmflag_mask = 0;
+
+	if (!(flags & XBF_READ))
+		kmflag_mask |= KM_ZERO;
 
 	/*
 	 * for buffers that are contained within a single page, just allocate
@@ -354,7 +358,8 @@ xfs_buf_allocate_memory(
 	size = BBTOB(bp->b_length);
 	if (size < PAGE_SIZE) {
 		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
+		bp->b_addr = kmem_alloc_io(size, align_mask,
+					   KM_NOFS | kmflag_mask);
 		if (!bp->b_addr) {
 			/* low memory - use alloc_page loop instead */
 			goto use_alloc_page;
@@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
 		struct page	*page;
 		uint		retries = 0;
 retry:
-		page = alloc_page(gfp_mask);
+		page = alloc_page(gfp_mask | kmflag_mask);
 		if (unlikely(page == NULL)) {
 			if (flags & XBF_READ_AHEAD) {
 				bp->b_page_count = i;
-- 
2.21.0



* Re: [PATCH v3] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-19 15:01 ` [PATCH v3] " Bill O'Donnell
@ 2019-09-19 17:03   ` Christoph Hellwig
  2019-09-19 17:20     ` Bill O'Donnell
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2019-09-19 17:03 UTC (permalink / raw)
  To: Bill O'Donnell; +Cc: linux-xfs

On Thu, Sep 19, 2019 at 10:01:54AM -0500, Bill O'Donnell wrote:
> +	uint			kmflag_mask = 0;
> +
> +	if (!(flags & XBF_READ))
> +		kmflag_mask |= KM_ZERO;

> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
>  		struct page	*page;
>  		uint		retries = 0;
>  retry:
> -		page = alloc_page(gfp_mask);
> +		page = alloc_page(gfp_mask | kmflag_mask);

alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
about this.


* Re: [PATCH v3] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-19 17:03   ` Christoph Hellwig
@ 2019-09-19 17:20     ` Bill O'Donnell
  2019-09-19 17:38       ` Christoph Hellwig
  0 siblings, 1 reply; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-19 17:20 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-xfs

On Thu, Sep 19, 2019 at 10:03:53AM -0700, Christoph Hellwig wrote:
> On Thu, Sep 19, 2019 at 10:01:54AM -0500, Bill O'Donnell wrote:
> > +	uint			kmflag_mask = 0;
> > +
> > +	if (!(flags & XBF_READ))
> > +		kmflag_mask |= KM_ZERO;
> 
> > @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> >  		struct page	*page;
> >  		uint		retries = 0;
> >  retry:
> > -		page = alloc_page(gfp_mask);
> > +		page = alloc_page(gfp_mask | kmflag_mask);
> 
> alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
> about this.

I wondered if the KM flag needed conversion to GFP, but saw no warning.
Thanks-
Bill


* Re: [PATCH v3] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-19 17:20     ` Bill O'Donnell
@ 2019-09-19 17:38       ` Christoph Hellwig
  2019-09-20 14:59         ` Eric Sandeen
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2019-09-19 17:38 UTC (permalink / raw)
  To: Bill O'Donnell; +Cc: Christoph Hellwig, linux-xfs

On Thu, Sep 19, 2019 at 12:20:47PM -0500, Bill O'Donnell wrote:
> > > @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> > >  		struct page	*page;
> > >  		uint		retries = 0;
> > >  retry:
> > > -		page = alloc_page(gfp_mask);
> > > +		page = alloc_page(gfp_mask | kmflag_mask);
> > 
> > alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
> > about this.
> 
> I wondered if the KM flag needed conversion to GFP, but saw no warning.

I'd be tempted to just do a manual memset after either kind of
allocation.


* [PATCH v4] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-16 15:35 [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations Bill O'Donnell
                   ` (3 preceding siblings ...)
  2019-09-19 15:01 ` [PATCH v3] " Bill O'Donnell
@ 2019-09-19 21:01 ` Bill O'Donnell
  2019-09-24  5:47   ` Darrick J. Wong
  2019-09-24 15:17   ` Darrick J. Wong
  4 siblings, 2 replies; 17+ messages in thread
From: Bill O'Donnell @ 2019-09-19 21:01 UTC (permalink / raw)
  To: linux-xfs

Guarantee zeroed memory buffers for cases where potential memory
leak to disk can occur. In these cases, kmem_alloc is used and
doesn't zero the buffer, opening the possibility of information
leakage to disk.

Use existing infrastructure (xfs_buf_allocate_memory) to obtain
the already zeroed buffer from kernel memory.

This solution avoids the performance issue that would occur if a
wholesale change to replace kmem_alloc with kmem_zalloc was done.

Signed-off-by: Bill O'Donnell <billodo@redhat.com>
---
v4: use __GFP_ZERO as part of gfp_mask (instead of KM_ZERO)
v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
v2: zeroed buffer not required for XBF_READ case. Correct placement
    and rename the XBF_ZERO flag.

 fs/xfs/xfs_buf.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 120ef99d09e8..5d0a68de5fa6 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -345,6 +345,15 @@ xfs_buf_allocate_memory(
 	unsigned short		page_count, i;
 	xfs_off_t		start, end;
 	int			error;
+	uint			kmflag_mask = 0;
+
+	/*
+	 * assure zeroed buffer for non-read cases.
+	 */
+	if (!(flags & XBF_READ)) {
+		kmflag_mask |= KM_ZERO;
+		gfp_mask |= __GFP_ZERO;
+	}
 
 	/*
 	 * for buffers that are contained within a single page, just allocate
@@ -354,7 +363,8 @@ xfs_buf_allocate_memory(
 	size = BBTOB(bp->b_length);
 	if (size < PAGE_SIZE) {
 		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
-		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
+		bp->b_addr = kmem_alloc_io(size, align_mask,
+					   KM_NOFS | kmflag_mask);
 		if (!bp->b_addr) {
 			/* low memory - use alloc_page loop instead */
 			goto use_alloc_page;
-- 
2.21.0



* Re: [PATCH v3] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-19 17:38       ` Christoph Hellwig
@ 2019-09-20 14:59         ` Eric Sandeen
  2019-09-24  4:13           ` Dave Chinner
  0 siblings, 1 reply; 17+ messages in thread
From: Eric Sandeen @ 2019-09-20 14:59 UTC (permalink / raw)
  To: Christoph Hellwig, Bill O'Donnell; +Cc: linux-xfs

On 9/19/19 12:38 PM, Christoph Hellwig wrote:
> On Thu, Sep 19, 2019 at 12:20:47PM -0500, Bill O'Donnell wrote:
>>>> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
>>>>  		struct page	*page;
>>>>  		uint		retries = 0;
>>>>  retry:
>>>> -		page = alloc_page(gfp_mask);
>>>> +		page = alloc_page(gfp_mask | kmflag_mask);
>>>
>>> alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
>>> about this.
>>
>> I wondered if the KM flag needed conversion to GFP, but saw no warning.
> 
> I'd be tempted to just do a manual memset after either kind of
> allocation.

At some point I think Dave had suggested that at least when allocating pages,
using the flag would be more efficient?

-Eric


* Re: [PATCH v3] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-20 14:59         ` Eric Sandeen
@ 2019-09-24  4:13           ` Dave Chinner
  0 siblings, 0 replies; 17+ messages in thread
From: Dave Chinner @ 2019-09-24  4:13 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: Christoph Hellwig, Bill O'Donnell, linux-xfs

On Fri, Sep 20, 2019 at 09:59:41AM -0500, Eric Sandeen wrote:
> On 9/19/19 12:38 PM, Christoph Hellwig wrote:
> > On Thu, Sep 19, 2019 at 12:20:47PM -0500, Bill O'Donnell wrote:
> >>>> @@ -391,7 +396,7 @@ xfs_buf_allocate_memory(
> >>>>  		struct page	*page;
> >>>>  		uint		retries = 0;
> >>>>  retry:
> >>>> -		page = alloc_page(gfp_mask);
> >>>> +		page = alloc_page(gfp_mask | kmflag_mask);
> >>>
> >>> alloc_page takes GFP_ flags, not KM_.  In fact sparse should have warned
> >>> about this.
> >>
> >> I wondered if the KM flag needed conversion to GFP, but saw no warning.
> > 
> > I'd be tempted to just do a manual memset after either kind of
> > allocation.
> 
> At some point I think Dave had suggested that at least when allocating pages,
> using the flag would be more efficient?

With some configurations pages come from the free lists pre-zeroed,
and so don't need zeroing to initialise them (e.g. when memory
poisoning is turned on, or pages are being zeroed on free). Hence if
you use __GFP_ZERO then it will only zero if the page obtained from
the freelist isn't already zero. The __GFP_ZERO call will also use
the most efficient method of zeroing the page for the platform via
clear_page() rather than memset()....

/me shrugs and doesn't really care either way....

-Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH v4] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-19 21:01 ` [PATCH v4] " Bill O'Donnell
@ 2019-09-24  5:47   ` Darrick J. Wong
  2019-09-24 15:17   ` Darrick J. Wong
  1 sibling, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2019-09-24  5:47 UTC (permalink / raw)
  To: Bill O'Donnell; +Cc: linux-xfs

On Thu, Sep 19, 2019 at 04:01:38PM -0500, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
> 
> Use existing infrastructure (xfs_buf_allocate_memory) to obtain
> the already zeroed buffer from kernel memory.
> 
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
> 
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> ---
> v4: use __GFP_ZERO as part of gfp_mask (instead of KM_ZERO)
> v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
> v2: zeroed buffer not required for XBF_READ case. Correct placement
>     and rename the XBF_ZERO flag.
> 
>  fs/xfs/xfs_buf.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..5d0a68de5fa6 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,6 +345,15 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
> +
> +	/*
> +	 * assure zeroed buffer for non-read cases.
> +	 */
> +	if (!(flags & XBF_READ)) {
> +		kmflag_mask |= KM_ZERO;
> +		gfp_mask |= __GFP_ZERO;
> +	}

Jeez it feels grody to have to set two different flags variables just to
get __GFP_ZERO consistently but I'll run it through xfstests overnight.

Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

>  
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
> @@ -354,7 +363,8 @@ xfs_buf_allocate_memory(
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask,
> +					   KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> -- 
> 2.21.0
> 


* Re: [PATCH v4] xfs: assure zeroed memory buffers for certain kmem allocations
  2019-09-19 21:01 ` [PATCH v4] " Bill O'Donnell
  2019-09-24  5:47   ` Darrick J. Wong
@ 2019-09-24 15:17   ` Darrick J. Wong
  1 sibling, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2019-09-24 15:17 UTC (permalink / raw)
  To: Bill O'Donnell; +Cc: linux-xfs

On Thu, Sep 19, 2019 at 04:01:38PM -0500, Bill O'Donnell wrote:
> Guarantee zeroed memory buffers for cases where potential memory
> leak to disk can occur. In these cases, kmem_alloc is used and
> doesn't zero the buffer, opening the possibility of information
> leakage to disk.
> 
> Use existing infrastructure (xfs_buf_allocate_memory) to obtain
> the already zeroed buffer from kernel memory.
> 
> This solution avoids the performance issue that would occur if a
> wholesale change to replace kmem_alloc with kmem_zalloc was done.
> 
> Signed-off-by: Bill O'Donnell <billodo@redhat.com>
> ---
> v4: use __GFP_ZERO as part of gfp_mask (instead of KM_ZERO)
> v3: remove XBF_ZERO flag, and instead use XBF_READ flag only.
> v2: zeroed buffer not required for XBF_READ case. Correct placement
>     and rename the XBF_ZERO flag.
> 
>  fs/xfs/xfs_buf.c | 12 +++++++++++-

/me wakes up and wonders, what about the other kmem_alloc_io callers in
xfs?  I think we need to slip a KM_ZERO into the allocation call when we
allocate the log buffers, right?  What about log recovery?

--D

>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 120ef99d09e8..5d0a68de5fa6 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -345,6 +345,15 @@ xfs_buf_allocate_memory(
>  	unsigned short		page_count, i;
>  	xfs_off_t		start, end;
>  	int			error;
> +	uint			kmflag_mask = 0;
> +
> +	/*
> +	 * assure zeroed buffer for non-read cases.
> +	 */
> +	if (!(flags & XBF_READ)) {
> +		kmflag_mask |= KM_ZERO;
> +		gfp_mask |= __GFP_ZERO;
> +	}
>  
>  	/*
>  	 * for buffers that are contained within a single page, just allocate
> @@ -354,7 +363,8 @@ xfs_buf_allocate_memory(
>  	size = BBTOB(bp->b_length);
>  	if (size < PAGE_SIZE) {
>  		int align_mask = xfs_buftarg_dma_alignment(bp->b_target);
> -		bp->b_addr = kmem_alloc_io(size, align_mask, KM_NOFS);
> +		bp->b_addr = kmem_alloc_io(size, align_mask,
> +					   KM_NOFS | kmflag_mask);
>  		if (!bp->b_addr) {
>  			/* low memory - use alloc_page loop instead */
>  			goto use_alloc_page;
> -- 
> 2.21.0
> 


end of thread, other threads:[~2019-09-24 15:17 UTC | newest]

Thread overview: 17+ messages
2019-09-16 15:35 [PATCH] xfs: assure zeroed memory buffers for certain kmem allocations Bill O'Donnell
2019-09-16 21:24 ` Eric Sandeen
2019-09-16 21:30   ` Darrick J. Wong
2019-09-16 21:32   ` Bill O'Donnell
2019-09-16 21:54 ` Dave Chinner
2019-09-18 15:47 ` [PATCH v2] " Bill O'Donnell
2019-09-18 16:32   ` Darrick J. Wong
2019-09-18 22:11     ` Bill O'Donnell
2019-09-19 15:01 ` [PATCH v3] " Bill O'Donnell
2019-09-19 17:03   ` Christoph Hellwig
2019-09-19 17:20     ` Bill O'Donnell
2019-09-19 17:38       ` Christoph Hellwig
2019-09-20 14:59         ` Eric Sandeen
2019-09-24  4:13           ` Dave Chinner
2019-09-19 21:01 ` [PATCH v4] " Bill O'Donnell
2019-09-24  5:47   ` Darrick J. Wong
2019-09-24 15:17   ` Darrick J. Wong
