From: Nikita Yushchenko <nikita.yoush@cogentembedded.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@linaro.org>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	linux-kernel@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	Simon Horman <horms@verge.net.au>,
	linux-pci@vger.kernel.org, Bjorn Helgaas <bhelgaas@google.com>,
	artemi.ivanov@cogentembedded.com,
	Keith Busch <keith.busch@intel.com>, Jens Axboe <axboe@fb.com>,
	Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org
Subject: Re: NVMe vs DMA addressing limitations
Date: Tue, 10 Jan 2017 10:31:47 +0300
Message-ID: <c4fd007c-3f07-fc2a-8cbb-7152f578609d@cogentembedded.com>
In-Reply-To: <20170110070719.GA17208@lst.de>

Christoph, thanks for the clear input.

Arnd, I think that given this discussion, the best short-term solution is
indeed the patch I submitted yesterday.  That is, your version plus
coherent mask support.  With that, set_dma_mask(DMA_BIT_MASK(64)) will
succeed and the hardware will work with swiotlb.
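
For reference, the driver-side pattern this unblocks is just the usual
mask negotiation - a sketch of the generic pattern only, not the NVMe
code and not the patch itself:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* foo_probe() is a hypothetical PCI driver probe routine. */
    static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            /*
             * Prefer full 64-bit streaming + coherent masks; the idea
             * is that this now succeeds, and streaming mappings that
             * land above 4G get bounced through swiotlb.
             */
            if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
                dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
                    return -ENODEV;

            return 0;
    }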

A possible next step is to teach swiotlb to dynamically allocate bounce
buffers anywhere within arm64's ZONE_DMA.
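
Very roughly, I imagine something along these lines - purely a sketch
of the concept, not existing swiotlb code, and the function name and
placement are made up:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /*
     * Hypothetical fallback for when the static swiotlb pool is
     * exhausted: grab a one-off buffer from ZONE_DMA (the low 4G on
     * this platform) instead of failing the mapping.
     */
    static void *swiotlb_dyn_alloc(size_t size)
    {
            return (void *)__get_free_pages(GFP_DMA | GFP_ATOMIC,
                                            get_order(size));
    }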

Also, there is some hope that R-Car *can* IOMMU-translate the addresses
that the PCIe module issues to the system bus, although previous attempts
to make that work failed.  Additional research is needed here.

Nikita

> On Tue, Jan 10, 2017 at 09:47:21AM +0300, Nikita Yushchenko wrote:
>> I'm now working with HW that:
>> - is in no way "low end" or "obsolete": it has 4G of RAM and 8 CPU cores,
>> and is still being manufactured and developed,
>> - has 75% of its RAM located beyond the first 4G of address space,
>> - can't physically handle incoming PCIe transactions addressed to memory
>> beyond 4G.
> 
> It might not be low end or obsolete, but it's absolutely braindead.
> Your I/O performance will suffer badly for the life of the platform
> because someone tried to save 2 cents, and there is not much we can do
> about it.
> 
>> (1) it constantly runs out of swiotlb space, logs are full of warnings
>> despite rate limiting,
> 
>> Per my current understanding, blk-level bounce buffering will at least
>> help with (1) - if done properly, it will allocate bounce buffers anywhere
>> in memory below 4G, not within the dedicated swiotlb space (which is
>> small, and enlarging it makes memory permanently unavailable for other
>> uses).  This looks simple and safe (in the sense of not breaking
>> unrelated use cases in any way).
> 
> Yes.  Although there is absolutely no reason why swiotlb could not
> do the same.
> 
>> (2) it runs far from optimal due to bounce-buffering almost all I/O,
>> despite lots of free memory in the area where direct DMA is possible.
> 
>> Addressing (2) looks much more difficult, because a different memory
>> allocation policy is required for that.
> 
> It's basically not possible.  Every piece of memory in a Linux
> kernel is a possible source of I/O, and depending on the workload
> type it might even be the prime source of I/O.
> 
>>> NVMe should never bounce, the fact that it currently possibly does
>>> for highmem pages is a bug.
>>
>> The entire topic is absolutely not related to highmem (i.e. memory not
>> directly addressable by a 32-bit kernel).
> 
> I did not say this affects you, but thanks to your mail I noticed that
> NVMe has a suboptimal setting there.  Also note that highmem does not
> have to imply a 32-bit kernel, just physical memory that is not in the
> kernel mapping.
> 
>> What we are discussing is a hw-originated restriction on where DMA is
>> possible.
> 
> Yes, where hw means the SOC, and not the actual I/O device, which is an
> important distinction.
> 
>>> Or even better, remove the call to dma_set_mask_and_coherent with
>>> DMA_BIT_MASK(32).  NVMe is designed around having proper 64-bit DMA
>>> addressing; there is no point in trying to pretend it works without that.
>>
>> Are you claiming that the NVMe driver in mainline is intentionally
>> designed not to work on HW that can't do DMA to the entire 64-bit space?
> 
> It is not intended to handle the case where the SOC / chipset
> can't handle DMA to all physical memory, yes.
> 
>> Such setups do exist, and there is interest in making them work.
> 
> Sure, but it's not the job of the NVMe driver to work around such a broken
> system.  It's something your architecture code needs to do, maybe with
> a bit of core kernel support.
> 
>> Quite a few pages used for block I/O are allocated by the filemap code -
>> and at allocation time it is known which inode the page is being allocated
>> for.  If this inode belongs to a filesystem located on a known device with
>> known DMA limitations, this knowledge can be used to allocate a page that
>> can be DMAed to directly.
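
To illustrate, the crudest form of this would be something like the
following.  This is a sketch only: the flag and the open hook are made
up, and it assumes the filesystem knows at open time that its backing
device can only DMA below 4G; only mapping_set_gfp_mask() is real:

    #include <linux/fs.h>
    #include <linux/pagemap.h>
    #include <linux/gfp.h>

    /* Hypothetical: set when the backing device can only DMA below 4G. */
    static bool bdev_needs_low_dma;

    /*
     * Hypothetical open hook: make page-cache pages for this inode come
     * from the low 4G, so the device can DMA to them directly.
     */
    static int foo_open(struct inode *inode, struct file *file)
    {
            if (bdev_needs_low_dma)
                    mapping_set_gfp_mask(inode->i_mapping,
                                         GFP_USER | __GFP_DMA32);
            return 0;
    }
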
> 
> But in other cases we might never DMA to it.  Or we rarely DMA to it, say
> for a machine running databases or qemu and using lots of direct I/O.  Or
> a storage target using its local alloc_pages buffers.
> 
>> Sure, there are lots of cases where at allocation time there is no idea
>> which device will run DMA on the page being allocated, or perhaps the page
>> is going to be shared, or whatever.  Such cases unavoidably require bounce
>> buffers if the page ends up being used with a device with DMA limitations.
>> But still there are cases where better allocation can remove the need for
>> bounce buffers - without hurting the other cases.
> 
> It takes your at most 1GB of DMA-addressable memory away from other uses,
> and introduces the crazy highmem VM tuning issues we had with big
> 32-bit x86 systems in the past.
> 
