From: Vishal Verma <vishal.l.verma@intel.com>
To: Lu Zhang <luzh@eng.ucsd.edu>
Cc: Andreas Dilger <adilger@dilger.ca>,
	Slava Dubeyko <Vyacheslav.Dubeyko@wdc.com>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@ml01.01.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>,
	Viacheslav Dubeyko <slava@dubeyko.com>,
	Andiry Xu <andiry@gmail.com>,
	"lsf-pc@lists.linux-foundation.org"
	<lsf-pc@lists.linux-foundation.org>
Subject: Re: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
Date: Thu, 19 Jan 2017 17:46:33 -0700	[thread overview]
Message-ID: <20170120004633.GA14128@omniknight.lm.intel.com> (raw)
In-Reply-To: <CAL4pJv6MvJhTuPJbPAV-HXGrMST-dJs461O=wwfcpdvQA-amdA@mail.gmail.com>

On 01/17, Lu Zhang wrote:
> I'm curious about the fault model and corresponding hardware ECC mechanisms
> for NVDIMMs. In my understanding, for a memory access to trigger an MCE, the
> memory controller must find a detectable but uncorrectable error (DUE). So
> if there is no hardware ECC support, the media errors won't even be noticed,
> let alone reported as badblocks or machine checks.
> 
> Current hardware ECC support for DRAM usually employs a (72, 64) single-bit
> error correction code, and for advanced ECC there are techniques like
> Chipkill or SDDC which can tolerate a single DRAM chip failure. What is the
> expected ECC mode for NVDIMMs, assuming that PCM or 3D XPoint based
> technology might have higher error rates?

I'm sure once NVDIMMs start becoming widely available, there will be
more information on how they do ECC..

> 
> If a DUE does happen and is flagged to the file system via MCE (somehow...),
> and the fs finds that the error corrupts one of its allocated data pages or
> its metadata, then if the fs wants to recover its data, the intuition is
> that it needs a stronger error correction mechanism that can correct the
> hardware-uncorrectable errors. So knowing the hardware ECC baseline helps
> the file system understand how severe the faults behind badblocks are, and
> develop its recovery methods accordingly.

As mentioned before, this discussion is more about presentation of
errors in a known, consumable format, rather than recovering from errors.
While recovering from errors is interesting, we already have layers
like RAID for that, and they are as applicable to NVDIMM-backed storage
as they have been for disk/SSD-based storage.
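
As a rough illustration (device names are just placeholders), a software
raid1 mirror over two pmem namespaces can be set up with the usual md
tooling today, e.g.:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/pmem0 /dev/pmem1
  mkfs.xfs /dev/md0

The caveat being that IO then goes through the normal block path via md,
rather than DAX.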

> 
> Regards,
> Lu
> 
> On Tue, Jan 17, 2017 at 6:01 PM, Andiry Xu <andiry@gmail.com> wrote:
> 
> > On Tue, Jan 17, 2017 at 4:16 PM, Andreas Dilger <adilger@dilger.ca> wrote:
> > > On Jan 17, 2017, at 3:15 PM, Andiry Xu <andiry@gmail.com> wrote:
> > >> On Tue, Jan 17, 2017 at 1:35 PM, Vishal Verma <vishal.l.verma@intel.com> wrote:
> > >>> On 01/16, Darrick J. Wong wrote:
> > >>>> On Fri, Jan 13, 2017 at 05:49:10PM -0700, Vishal Verma wrote:
> > >>>>> On 01/14, Slava Dubeyko wrote:
> > >>>>>>
> > >>>>>> ---- Original Message ----
> > >>>>>> Subject: [LSF/MM TOPIC] Badblocks checking/representation in filesystems
> > >>>>>> Sent: Jan 13, 2017 1:40 PM
> > >>>>>> From: "Verma, Vishal L" <vishal.l.verma@intel.com>
> > >>>>>> To: lsf-pc@lists.linux-foundation.org
> > >>>>>> Cc: linux-nvdimm@lists.01.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org
> > >>>>>>
> > >>>>>>> The current implementation of badblocks, where we consult the
> > >>>>>>> badblocks list for every IO in the block driver, works and is a
> > >>>>>>> last-option failsafe, but from a user perspective, it isn't the
> > >>>>>>> easiest interface to work with.
> > >>>>>>
> > >>>>>> As I remember, the FAT and HFS+ specifications describe a bad blocks
> > >>>>>> (physical sectors) table. I believe that this table was used for the
> > >>>>>> case of floppy media. But this table has finally become a completely
> > >>>>>> obsolete artefact because most storage devices are reliable enough.
> > >>>>>> Why do you need
> > >>>>
> > >>>> ext4 has a badblocks inode to own all the bad spots on disk, but ISTR it
> > >>>> doesn't support(??) extents or 64-bit filesystems, and might just be a
> > >>>> vestigial organ at this point.  XFS doesn't have anything to track bad
> > >>>> blocks currently....
> > >>>>
> > >>>>>> to expose the bad blocks at the file system level?  Do you expect
> > >>>>>> that the next generation of NVM memory will be so unreliable that
> > >>>>>> the file system needs to manage bad blocks? What about erasure
> > >>>>>> coding schemes? Does the file system really need to suffer from the
> > >>>>>> bad block issue?
> > >>>>>>
> > >>>>>> Usually, we use LBAs and it is the responsibility of the storage
> > >>>>>> device to map a bad physical block/page/sector to a valid one. Do
> > >>>>>> you mean that we have direct access to the physical NVM memory
> > >>>>>> address space? But it looks like we can have a "bad block" issue
> > >>>>>> even when we access data in a page cache memory page (if we use NVM
> > >>>>>> memory for the page cache, of course). So, what do you imply by the
> > >>>>>> "bad block" issue?
> > >>>>>
> > >>>>> We don't have direct physical access to the device's address space,
> > >>>>> in the sense that the device is still free to perform remapping of
> > >>>>> chunks of NVM underneath us. The problem is that when a block or
> > >>>>> address range (as small as a cache line) goes bad, the device
> > >>>>> maintains a poison bit for every affected cache line. Behind the
> > >>>>> scenes, it may have already remapped the range, but the cache line
> > >>>>> poison has to be kept so that there is a notification to the
> > >>>>> user/owner of the data that something has been lost. Since NVM is
> > >>>>> byte-addressable memory sitting on the memory bus, such a poisoned
> > >>>>> cache line results in memory errors and SIGBUSes, compared to
> > >>>>> traditional storage where an app will get nice and friendly
> > >>>>> (relatively speaking..) -EIOs. The whole badblocks implementation was
> > >>>>> done so that the driver can intercept IO (i.e. reads) to _known_ bad
> > >>>>> locations, and short-circuit them with an EIO. If the driver doesn't
> > >>>>> catch these, the reads will turn into a memory bus access, and the
> > >>>>> poison will cause a SIGBUS.
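
To illustrate, a minimal sketch of that driver-level check (simplified, with
made-up names; not the actual drivers/nvdimm/pmem.c code):

static int pmem_read_sketch(struct badblocks *bb, void *dst, void *pmem_addr,
                            sector_t sector, unsigned int len)
{
        sector_t first_bad;
        int num_bad;

        /* known-bad range: short-circuit with -EIO instead of touching media */
        if (bb->count &&
            badblocks_check(bb, sector, len / 512, &first_bad, &num_bad))
                return -EIO;

        /* the real driver uses a machine-check-safe copy routine here */
        memcpy(dst, pmem_addr, len);
        return 0;
}
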
> > >>>>
> > >>>> "driver" ... you mean XFS?  Or do you mean the thing that makes pmem
> > >>>> look kind of like a traditional block device? :)
> > >>>
> > >>> Yes, the thing that makes pmem look like a block device :) --
> > >>> drivers/nvdimm/pmem.c
> > >>>
> > >>>>
> > >>>>> This effort is to try and make this badblock checking smarter - and try
> > >>>>> and reduce the penalty on every IO to a smaller range, which only the
> > >>>>> filesystem can do.
> > >>>>
> > >>>> Though... now that XFS merged the reverse mapping support, I've been
> > >>>> wondering if there'll be a resubmission of the device errors callback?
> > >>>> It still would be useful to be able to inform the user that part of
> > >>>> their fs has gone bad, or, better yet, if the buffer is still in memory
> > >>>> someplace else, just write it back out.
> > >>>>
> > >>>> Or I suppose if we had some kind of raid1 set up between memories we
> > >>>> could read one of the other copies and rewrite it into the failing
> > >>>> region immediately.
> > >>>
> > >>> Yes, that is kind of what I was hoping to accomplish via this
> > >>> discussion. How much would filesystems want to be involved in this sort
> > >>> of badblocks handling, if at all? I can refresh my patches that provide
> > >>> the fs notification, but that's the easy bit, and a starting point.
> > >>>
> > >>
> > >> I have some questions. Why does moving badblock handling to the file
> > >> system level avoid the checking phase? At the file system level, for
> > >> each I/O I still have to check the badblock list, right? Do you mean
> > >> that during mount it can go through the pmem device, locate all the
> > >> data structures mangled by badblocks, and handle them accordingly, so
> > >> that during normal running the badblocks will never be accessed? Or,
> > >> if there is replication/snapshot support, use a copy to recover the
> > >> badblocks?
> > >
> > > With ext4 badblocks, the main outcome is that the bad blocks would be
> > > permanently marked in the allocation bitmap as being used, and they would
> > > never be allocated to a file, so they should never be accessed unless
> > > doing a full device scan (which ext4 and e2fsck never do).  That would
> > > avoid the need to check every I/O against the bad blocks list, if the
> > > driver knows that the filesystem will handle this.
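
(For reference, the traditional tooling side of that ext4 feature, with the
device name as a placeholder:

  e2fsck -c /dev/sdX1    # run a badblocks scan and add hits to the bad block inode
  dumpe2fs -b /dev/sdX1  # list the blocks currently reserved as bad

so the list Andreas describes is visible and maintainable from userspace.)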
> > >
> >
> > Thank you for the explanation. However, this only works for free blocks,
> > right? What about allocated blocks, like file data and metadata?
> >
> > Thanks,
> > Andiry
> >
> > > The one caveat is that ext4 only allows 32-bit block numbers in the
> > > badblocks list, since this feature hasn't been used in a long time.
> > > This is good for up to 16TB filesystems, but if there was a demand to
> > > use this feature again it would be possible to allow 64-bit block numbers.
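
(That limit follows from ext4's default 4 KiB block size: 2^32 blocks *
4 KiB/block = 16 TiB.)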
> > >
> > > Cheers, Andreas
> > >
> > >
> > >
> > >
> > >


