From: Dave Chinner <david@fromorbit.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "Verma, Vishal L" <vishal.l.verma@intel.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"jack@suse.cz" <jack@suse.cz>, "axboe@fb.com" <axboe@fb.com>,
	"linux-nvdimm@ml01.01.org" <linux-nvdimm@ml01.01.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xfs@oss.sgi.com" <xfs@oss.sgi.com>,
	"hch@infradead.org" <hch@infradead.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Wilcox, Matthew R" <matthew.r.wilcox@intel.com>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"linux-ext4@vger.kernel.org" <linux-ext4@vger.kernel.org>,
	"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH v2 5/5] dax: handle media errors in dax_do_io
Date: Tue, 26 Apr 2016 10:11:57 +1000	[thread overview]
Message-ID: <20160426001157.GE18496@dastard> (raw)
In-Reply-To: <CAPcyv4i6iwm1iY2mQ5yRbYfRexQroUX_R0B-db4ROU837fratw@mail.gmail.com>

On Mon, Apr 25, 2016 at 04:43:14PM -0700, Dan Williams wrote:
> On Mon, Apr 25, 2016 at 4:25 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Mon, Apr 25, 2016 at 05:14:36PM +0000, Verma, Vishal L wrote:
> >> On Mon, 2016-04-25 at 01:31 -0700, hch@infradead.org wrote:
> >> > On Sat, Apr 23, 2016 at 06:08:37PM +0000, Verma, Vishal L wrote:
> >> > >
> >> > > direct_IO might fail with -EINVAL due to misalignment, or -ENOMEM
> >> > > due
> >> > > to some allocation failing, and I thought we should return the
> >> > > original
> >> > > -EIO in such cases so that the application doesn't lose the
> >> > > information
> >> > > that the bad block is actually causing the error.
> >> > EINVAL is a concern here.  Not because of which error is reported,
> >> > but because it means your current scheme is fundamentally broken - we
> >> > need to support I/O at any alignment for DAX I/O, and not fail due to
> >> > alignment concerns for a highly specific degraded case.
> >> >
> >> > I think this whole series needs to go back to the drawing board as I
> >> > don't think it can actually rely on using direct I/O as the EIO
> >> > fallback.
> >> >
> >> Agreed that DAX I/O can happen with any size/alignment, but how else do
> >> we send an IO through the driver without alignment restrictions? Also,
> >> the granularity at which we store badblocks is 512B sectors, so it
> >> seems natural that to clear such a sector, you'd expect to send a write
> >> to the whole sector.
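In concrete terms, clearing at badblocks granularity means the rewrite has to cover whole 512-byte sectors, however small the damaged range is. A minimal sketch of the rounding involved (the helper name is illustrative, not from the patch set):

```python
SECTOR = 512  # badblocks granularity discussed above


def sector_span(offset, length):
    """Round an arbitrary byte range out to whole 512B sectors.

    A write intended to clear a media error must cover the full
    sector, so the damaged byte range is rounded down/up to sector
    boundaries before the rewrite is issued.

    Returns (aligned_offset, aligned_length) in bytes.
    """
    start = (offset // SECTOR) * SECTOR
    end = ((offset + length + SECTOR - 1) // SECTOR) * SECTOR
    return start, end - start
```

For example, a 10-byte error at byte offset 1000 still requires a full-sector write: `sector_span(1000, 10)` yields `(512, 512)`.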
> >>
> >> The expected usage flow is:
> >>
> >> - Application hits EIO doing dax_IO or load/store io
> >>
> >> - It checks badblocks and discovers its files have lost data
> >
> > Lots of hand-waving here. How does the application map a bad
> > "sector" to a file without scanning the entire filesystem to find
> > the owner of the bad sector?
> >
> >> - It write()s those sectors (possibly converted to file offsets using
> >> fiemap)
> >>     * This triggers the fallback path, but if the application is doing
> >> this level of recovery, it will know the sector is bad, and write the
> >> entire sector
> >
> > Where does the application find the data that was lost to be able to
> > rewrite it?
> >
> >> - Or it replaces the entire file from backup also using write() (not
> >> mmap+stores)
> >>     * This just frees the fs block, and the next time the block is
> >> reallocated by the fs, it will likely be zeroed first, and that will be
> >> done through the driver and will clear errors
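The usage flow above leans on mapping bad device sectors to file offsets. A sketch of that mapping step in userspace pseudocode (the extent-tuple format is a stand-in for what a FIEMAP query returns; actual fiemap parsing and the rewrite I/O are elided):

```python
SECTOR = 512


def affected_offsets(badblocks, extents):
    """Map bad device sectors to file byte offsets.

    badblocks: iterable of bad sector numbers, as read from the
               device's badblocks list.
    extents:   (file_offset, device_offset, length) tuples in bytes,
               as a FIEMAP-style extent map would provide for one file.

    Returns the file offsets (sector-aligned) that lost data; the
    application would then rewrite each full sector from its own copy.
    """
    hits = []
    for sector in badblocks:
        dev_byte = sector * SECTOR
        for f_off, d_off, length in extents:
            if d_off <= dev_byte < d_off + length:
                hits.append(f_off + (dev_byte - d_off))
    return hits
```

Note this only works per-file and in the forward direction; it says nothing about how the application finds *which* file owns a bad sector in the first place.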
> >
> > There's an implicit assumption that applications will keep redundant
> > copies of their data at the /application layer/ and be able to
> > automatically repair it? And then there's the implicit assumption
> > that it will unlink and free the entire file before writing a new
> > copy, and that then assumes that the filesystem will zero blocks if
> > they get reused to clear errors on that LBA sector mapping before
> > they are accessible again to userspace..
> >
> > It seems to me that there are a number of assumptions being made
> > across multiple layers here. Maybe I've missed something - can you
> > point me to the design/architecture description so I can see how
> > "app does data recovery itself" dance is supposed to work?
> >
> 
> Maybe I missed something, but all these assumptions are already
> present for typical block devices, i.e. sectors may go bad and a write
> may make the sector usable again.

The assumption we make about sectors going bad on SSDs or SRDs is
that the device is about to die and needs replacing ASAP. Then
RAID takes care of the rebuild completely transparently. i.e.
handling and correcting bad sectors is typically done completely
transparently /below/ the filesystem like so:

Application
Filesystem
block
[LBA mapping/redundancy/correction driver e.g. md/dm]
driver
hardware
[LBA redundancy/correction e.g. h/w RAID]

In the case of filesystems with their own RAID/redundancy code (e.g.
btrfs), then it looks like this:

Application
Filesystem
mapping/redundancy/correction driver
block
driver
hardware
[LBA redundancy/correction e.g. h/w RAID]

> This patch series is extending that
> out to the DAX-mmap case, but it's the same principle of "write to
> clear error" that we live with in the block-I/O path.  What
> clarification are you looking for beyond that point?

I'm asking for an actual design document that explains how moving
all the redundancy and bad sector correction stuff from the LBA
layer up into application space is supposed to work when
applications have no clue about LBA mappings, nor tend to keep
redundant data around. i.e. you're proposing this:

Application
Application data redundancy/correction
Filesystem
Block
[LBA mapping/redundancy/correction driver e.g. md/dm]
driver
hardware

And somehow all the error information from the hardware layer needs
to be propagated up to the application layer, along with all the
mapping information from the filesystem and block layers for the
application to make sense of the hardware reported errors.

I see assumptions that this "just works", but we don't have any of
the relevant APIs or infrastructure to enable the application to do
the hardware error->file+offset namespace mapping (i.e. filesystem
reverse mapping for file offsets and directory paths, and
reverse mapping for the block layer remapping drivers).
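To make the missing translation chain concrete, here is a sketch of what an application would need every remapping layer to expose before a hardware error address becomes a file offset. The table formats are invented stand-ins for an md/dm mapping table and a filesystem forward map; no such userspace APIs exist today, which is the point:

```python
def hw_error_to_file_offset(hw_sector, dm_map, extents, sector_size=512):
    """Walk a hardware sector up the stack to a file byte offset.

    dm_map:  (phys_start, logical_start, length) tuples in sectors,
             standing in for an md/dm LBA remapping table.
    extents: (file_offset, device_offset, length) tuples in bytes,
             standing in for the filesystem's extent map.

    Returns a file offset, or None when a layer cannot translate --
    which is exactly the reverse-mapping gap described above.
    """
    # Block remapping layer: physical sector -> logical sector.
    logical = None
    for p_start, l_start, length in dm_map:
        if p_start <= hw_sector < p_start + length:
            logical = hw_sector - p_start + l_start
            break
    if logical is None:
        return None

    # Filesystem layer: logical device byte -> file byte offset.
    dev_byte = logical * sector_size
    for f_off, d_off, length in extents:
        if d_off <= dev_byte < d_off + length:
            return f_off + (dev_byte - d_off)
    return None
```

Even this toy version needs per-layer mapping tables that the kernel does not currently export, and it still does not answer which *file path* owns the offset.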

I haven't seen any design/documentation for infrastructure at the
application layer to handle redundant data correctly and
transparently, so I don't have any idea what technical
requirements this different IO stack places on filesystems.
Hence I'm asking for some kind of architecture/design documentation
that I can read to understand exactly what is being proposed here...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

  reply	other threads:[~2016-04-26  0:13 UTC|newest]

Thread overview: 216+ messages in thread
2016-03-30  1:59 [PATCH v2 0/5] dax: handling of media errors Vishal Verma
2016-03-30  1:59 ` [PATCH v2 1/5] block, dax: pass blk_dax_ctl through to drivers Vishal Verma
2016-03-30  4:19   ` kbuild test robot
2016-04-15 14:55   ` Jeff Moyer
2016-03-30  1:59 ` [PATCH v2 2/5] dax: fallback from pmd to pte on error Vishal Verma
2016-04-15 14:55   ` Jeff Moyer
2016-03-30  1:59 ` [PATCH v2 3/5] dax: enable dax in the presence of known media errors (badblocks) Vishal Verma
2016-04-15 14:56   ` Jeff Moyer
2016-03-30  1:59 ` [PATCH v2 4/5] dax: use sb_issue_zerout instead of calling dax_clear_sectors Vishal Verma
2016-04-15 15:18   ` Jeff Moyer
2016-03-30  1:59 ` [PATCH v2 5/5] dax: handle media errors in dax_do_io Vishal Verma
2016-03-30  3:00   ` kbuild test robot
2016-03-30  6:34   ` Christoph Hellwig
2016-03-30  6:54     ` Vishal Verma
2016-03-30  6:56       ` Christoph Hellwig
2016-04-15 16:11   ` Jeff Moyer
2016-04-15 16:54     ` Verma, Vishal L
2016-04-15 17:11       ` Jeff Moyer
2016-04-15 17:37         ` Verma, Vishal L
2016-04-15 17:57           ` Dan Williams
2016-04-15 18:06             ` Jeff Moyer
2016-04-15 18:17               ` Dan Williams
2016-04-15 18:24                 ` Jeff Moyer
2016-04-15 18:56                   ` Dan Williams
2016-04-15 19:13                     ` Jeff Moyer
2016-04-15 19:01                 ` Toshi Kani
2016-04-15 19:08                   ` Toshi Kani
2016-04-20 20:59     ` Christoph Hellwig
2016-04-23 18:08       ` Verma, Vishal L
2016-04-25  8:31         ` hch
2016-04-25 15:32           ` Jeff Moyer
2016-04-26  8:32             ` hch
2016-04-25 17:14           ` Verma, Vishal L
2016-04-25 17:21             ` Dan Williams
2016-04-25 23:25             ` Dave Chinner
2016-04-25 23:34               ` Darrick J. Wong
2016-04-25 23:43               ` Dan Williams
2016-04-26  0:11                 ` Dave Chinner [this message]
2016-04-26  1:45                   ` Dan Williams
2016-04-26  2:56                     ` Dave Chinner
2016-04-26  4:18                       ` Dan Williams
2016-04-26  8:27                         ` Dave Chinner
2016-04-26 14:59                           ` Dan Williams
2016-04-26 15:31                             ` Jan Kara
2016-04-26 17:16                               ` Dan Williams
2016-04-25 23:53               ` Verma, Vishal L
2016-04-26  0:41                 ` Dave Chinner
2016-04-26 14:58                   ` Vishal Verma
2016-05-02 15:18                   ` Jeff Moyer
2016-05-02 17:53                     ` Dan Williams
2016-05-03  0:42                       ` Dave Chinner
2016-05-03  1:26                         ` Rudoff, Andy
2016-05-03  2:49                           ` Dave Chinner
2016-05-03 18:30                             ` Rudoff, Andy
2016-05-04  1:36                               ` Dave Chinner
2016-05-02 23:04                     ` Dave Chinner
2016-05-02 23:17                       ` Verma, Vishal L
2016-05-02 23:25                       ` Dan Williams
2016-05-03  1:51                         ` Dave Chinner
2016-05-03 17:28                           ` Dan Williams
2016-05-04  3:18                             ` Dave Chinner
2016-05-04  5:05                               ` Dan Williams
2016-04-26  8:33             ` hch
2016-04-26 15:01               ` Vishal Verma

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20160426001157.GE18496@dastard \
    --to=david@fromorbit.com \
    --cc=akpm@linux-foundation.org \
    --cc=axboe@fb.com \
    --cc=dan.j.williams@intel.com \
    --cc=hch@infradead.org \
    --cc=jack@suse.cz \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-ext4@vger.kernel.org \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-nvdimm@ml01.01.org \
    --cc=matthew.r.wilcox@intel.com \
    --cc=viro@zeniv.linux.org.uk \
    --cc=vishal.l.verma@intel.com \
    --cc=xfs@oss.sgi.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank
line before the message body.