From: Tomasz Nowicki <tnowicki@caviumnetworks.com>
To: Nate Watterson <nwatters@codeaurora.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>,
	joro@8bytes.org, will.deacon@arm.com
Cc: lorenzo.pieralisi@arm.com, Jayachandran.Nair@cavium.com,
	Ganapatrao.Kulkarni@cavium.com, ard.biesheuvel@linaro.org,
	linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure
Date: Tue, 19 Sep 2017 10:10:26 +0200	[thread overview]
Message-ID: <03297a05-8490-a86d-12ab-a99cf73c09ba@caviumnetworks.com> (raw)
In-Reply-To: <da87f76b-bcd4-ace8-4dd6-166dfde136e3@codeaurora.org>

Hi Nate,

On 19.09.2017 04:57, Nate Watterson wrote:
> Hi Tomasz,
> 
> On 9/18/2017 12:02 PM, Robin Murphy wrote:
>> Hi Tomasz,
>>
>> On 18/09/17 11:56, Tomasz Nowicki wrote:
>>> Since IOVA allocation failure is not an unusual case, we need to flush
>>> the CPUs' rcache in the hope that we will succeed in the next round.
>>>
>>> However, it is useful to decide whether we need the rcache flush step,
>>> for two reasons:
>>> - Scalability. On a large system with ~100 CPUs, iterating over and
>>>    flushing the rcache for each CPU becomes a serious bottleneck, so we
>>>    may want to deffer it.
> s/deffer/defer
> 
>>> - free_cpu_cached_iovas() does not care about max PFN we are interested
>>>    in. Thus we may flush our rcaches and still get no new IOVA like in
>>>    the commonly used scenario:
>>>
>>>      if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
>>>          iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
>>>
>>>      if (!iova)
>>>          iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
>>>
>>>     1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to
>>>        get PCI devices a SAC address
>>>     2. alloc_iova() fails due to full 32-bit space
>>>     3. rcaches contain PFNs out of 32-bit space so free_cpu_cached_iovas()
>>>        throws entries away for nothing and alloc_iova() fails again
>>>     4. Next alloc_iova_fast() call cannot take advantage of rcache since
>>>        we have just defeated caches. In this case we pick the slowest
>>>        option to proceed.
>>>
>>> This patch reworks the flushed_rcache local flag to be an additional
>>> function argument instead, controlling the rcache flush step. It also
>>> updates all users to do the flush only as a last resort.
>>
>> Looks like you've run into the same thing Nate found[1] - I came up with
>> almost the exact same patch, only with separate alloc_iova_fast() and
>> alloc_iova_fast_noretry() wrapper functions, but on reflection, just
>> exposing the bool to callers is probably simpler. One nit, can you
>> document it in the kerneldoc comment too? With that:
>>
>> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
>>
>> Thanks,
>> Robin.
>>
>> [1]:https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg19758.html 
>>
> This patch completely resolves the issue I reported in [1]!!

I somehow missed your observations in [1] :/
Anyway, it's great it fixes performance for you too.

> Tested-by: Nate Watterson <nwatters@codeaurora.org>

Thanks!
Tomasz
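
The rework described in the quoted commit message boils down to turning the
flushed_rcache local flag into an extra flush_rcache parameter, so that each
caller decides whether a failed allocation is worth draining every CPU's
rcache. As a rough sketch only (the names follow the existing
drivers/iommu/iova.c helpers; the actual patch may differ in detail), the
reworked allocator could look like this:

    unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
                                  unsigned long limit_pfn, bool flush_rcache)
    {
        struct iova *new_iova;
        unsigned long iova_pfn;

        /* Fast path: reuse a cached range from this CPU's rcache. */
        iova_pfn = iova_rcache_get(iovad, size, limit_pfn);
        if (iova_pfn)
            return iova_pfn;

    retry:
        new_iova = alloc_iova(iovad, size, limit_pfn, true);
        if (!new_iova) {
            unsigned int cpu;

            /* The caller did not ask for the expensive global flush. */
            if (!flush_rcache)
                return 0;

            /* Last resort: drain every CPU's rcache, then retry once. */
            flush_rcache = false;
            for_each_online_cpu(cpu)
                free_cpu_cached_iovas(cpu, iovad);
            goto retry;
        }

        return new_iova->pfn_lo;
    }

The dma-iommu scenario quoted above would then skip the flush on the
speculative 32-bit SAC attempt and pay for it only on the final, full-limit
attempt, roughly:

    /* SAC attempt: failing here is cheap, so do not flush the rcaches. */
    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
        iova = alloc_iova_fast(iovad, iova_len,
                               DMA_BIT_MASK(32) >> shift, false);

    /* Last chance: flush the rcaches before giving up entirely. */
    if (!iova)
        iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift, true);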

Thread overview:
2017-09-18 10:56 [PATCH 0/1] Optimise IOVA allocations for PCI devices Tomasz Nowicki
2017-09-18 10:56 ` [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure Tomasz Nowicki
2017-09-18 16:02   ` Robin Murphy
2017-09-19  2:57     ` Nate Watterson
2017-09-19  8:10       ` Tomasz Nowicki [this message]
2017-09-19  8:03     ` Tomasz Nowicki
