From: Dan Williams <dan.j.williams@intel.com>
To: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Tony Luck <tony.luck@intel.com>, Jan Kara <jack@suse.cz>,
	Mike Snitzer <snitzer@redhat.com>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Jens Axboe <axboe@fb.com>, Ingo Molnar <mingo@redhat.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	"dm-devel@redhat.com" <dm-devel@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 00/13] dax, pmem: move cpu cache maintenance to libnvdimm
Date: Sun, 22 Jan 2017 23:10:04 -0800
Message-ID: <CAPcyv4h83wk5SGVPY+8nUZ1e4Gq9fz61GE3BhAdzGi=HFAuDcg@mail.gmail.com>
In-Reply-To: <BY2PR21MB0036CA85562DDD21814C0B27CB720@BY2PR21MB0036.namprd21.prod.outlook.com>

On Sun, Jan 22, 2017 at 10:37 PM, Matthew Wilcox <mawilcox@microsoft.com> wrote:
> From: Christoph Hellwig [mailto:hch@lst.de]
>> On Sun, Jan 22, 2017 at 06:39:28PM +0000, Matthew Wilcox wrote:
>> > Two guests on the same physical machine (or a guest and a host) have access
>> > to the same set of physical addresses.  This might be an NV-DIMM, or it might
>> > just be DRAM (for the purposes of reducing guest overhead).  The network
>> > filesystem has been enhanced with a call to allow the client to ask the server
>> > "What is the physical address for this range of bytes in this file?"
>> >
>> > We don't want to use the guest pagecache here.  That's antithetical to the
>> > second usage, and it's inefficient for the first usage.
>>
>> And the answer is that you need a dax device for whatever memory is
>> exposed in this way, as it needs to show up in the memory map, for
>> example.
>
> Wow, DAX devices look painful and awful.  I certainly don't want the memory fronted by my network filesystem exposed to userspace for arbitrary access.  That just seems like a world of pain and bad experiences.  Absolutely the filesystem (or perhaps better, the ACPI tables) needs to mark that chunk of memory as reserved, but it's definitely not available for anyone to access without the filesystem being aware.
>
> Even if we let the filesystem create a DAX device that doesn't show up in /dev (for example), Dan's patches don't give us a way to go from a file on the filesystem to a set of dax_ops.  And it does need to be a per-file operation, e.g. to support a file on an XFS volume which might be on an RT device or a normal device.  That was why I leaned towards an address_space operation, but I'd be happy to see an inode_operation instead.

How about we solve the copy_from_user() abuse first, before we hijack
this thread for some future feature that, as far as I can see, has no
patches posted yet?

An incremental step towards disentangling filesystem-dax from
block_devices is a lookup mechanism to go from a block_device to a dax
object that holds the dax_ops. When this brave new filesystem enabling
appears, it can grow a mechanism to look up, or mount on, the dax
object directly.
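
Roughly, I am picturing something like the following (a sketch only --
every name here, dax_get_by_bdev() and struct dax_device included, is
made up for illustration and is not in the posted series):

	#include <linux/list.h>
	#include <linux/spinlock.h>
	#include <linux/fs.h>	/* struct block_device */

	struct dax_operations;	/* copy_from_iter, flush, ... */

	/* Hypothetical registry mapping a block_device to the dax
	 * object that carries its dax_ops. */
	struct dax_device {
		struct list_head list;
		struct block_device *bdev;	/* backing device */
		const struct dax_operations *ops;
	};

	static LIST_HEAD(dax_devices);
	static DEFINE_SPINLOCK(dax_lock);

	/* Resolve the dax object for a block device, or NULL if the
	 * device has no DAX capability.  A filesystem would call this
	 * once at mount time instead of poking at the request_queue. */
	struct dax_device *dax_get_by_bdev(struct block_device *bdev)
	{
		struct dax_device *dax_dev, *found = NULL;

		spin_lock(&dax_lock);
		list_for_each_entry(dax_dev, &dax_devices, list) {
			if (dax_dev->bdev == bdev) {
				found = dax_dev;
				break;
			}
		}
		spin_unlock(&dax_lock);
		return found;
	}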

One idea is to just hang a pointer to this dax object off of
bdev_inode, set at bdev open() time.
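
Continuing the sketch above (again hypothetical -- bdev_inode is the
private container in fs/block_dev.c, and the dax_dev member and helper
below are illustrations, not posted code):

	struct bdev_inode {
		struct block_device bdev;
		struct inode vfs_inode;
		struct dax_device *dax_dev;	/* assumed new member */
	};

	/* Called from bdev open(): resolve the dax object once and
	 * cache it, so the fs-dax path never repeats the lookup. */
	static void bdev_attach_dax(struct block_device *bdev)
	{
		struct bdev_inode *bi =
			container_of(bdev, struct bdev_inode, bdev);

		bi->dax_dev = dax_get_by_bdev(bdev);
	}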