From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753472AbcDVNFR (ORCPT ); Fri, 22 Apr 2016 09:05:17 -0400
Received: from foss.arm.com ([217.140.101.70]:34016 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751868AbcDVNFP (ORCPT ); Fri, 22 Apr 2016 09:05:15 -0400
Subject: Re: [PATCH v7 05/10] iommu/dma-reserved-iommu: reserved binding rb-tree and helpers
To: Eric Auger <eric.auger@linaro.org>, eric.auger@st.com,
 alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
 tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
 christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org
References: <1461084994-2355-1-git-send-email-eric.auger@linaro.org>
 <1461084994-2355-6-git-send-email-eric.auger@linaro.org>
 <5717805A.3070602@arm.com> <5717ABF2.1030204@linaro.org>
Cc: patches@linaro.org, linux-kernel@vger.kernel.org,
 Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
 p.fedin@samsung.com, iommu@lists.linux-foundation.org,
 Jean-Philippe.Brucker@arm.com, julien.grall@arm.com
From: Robin Murphy <robin.murphy@arm.com>
Message-ID: <571A2186.1050004@arm.com>
Date: Fri, 22 Apr 2016 14:05:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <5717ABF2.1030204@linaro.org>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 20/04/16 17:18, Eric Auger wrote:
> Robin,
> On 04/20/2016 03:12 PM, Robin Murphy wrote:
>> On 19/04/16 17:56, Eric Auger wrote:
>>> we will need to track which host physical addresses are mapped to
>>> reserved IOVA. In that prospect we introduce a new RB tree indexed
>>> by physical address. This RB tree only is used for reserved IOVA
>>> bindings.
>>>
>>> It is expected this RB tree will contain very few bindings.
>>
>> Sounds like a good reason in favour of using a list, and thus having
>> rather less code here ;)
>
> OK will move to a simple list.
>>
>>> Those
>>> generally correspond to single page mapping one MSI frame (GICv2m
>>> frame or ITS GITS_TRANSLATER frame).
>>>
>>> Signed-off-by: Eric Auger <eric.auger@linaro.org>
>>>
>>> ---
>>> v5 -> v6:
>>> - add comment about @d->reserved_lock to be held
>>>
>>> v3 -> v4:
>>> - that code was formerly in "iommu/arm-smmu: add a reserved binding RB
>>> tree"
>>> ---
>>>  drivers/iommu/dma-reserved-iommu.c | 63
>>> ++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 63 insertions(+)
>>>
>>> diff --git a/drivers/iommu/dma-reserved-iommu.c
>>> b/drivers/iommu/dma-reserved-iommu.c
>>> index 2562af0..f6fa18e 100644
>>> --- a/drivers/iommu/dma-reserved-iommu.c
>>> +++ b/drivers/iommu/dma-reserved-iommu.c
>>> @@ -23,6 +23,69 @@ struct reserved_iova_domain {
>>>  	int prot; /* iommu protection attributes to be obeyed */
>>>  };
>>>
>>> +struct iommu_reserved_binding {
>>> +	struct kref kref;
>>> +	struct rb_node node;
>>> +	struct iommu_domain *domain;
>>
>> Hang on, the tree these are in is already embedded in a domain. Ergo we
>> can't look them up without first knowing the domain they belong to, so
>> what purpose does this guy serve?
> this is used on the kref_put. The release function takes a kref; then we
> get the container to retrieve the binding and storing the domain here
> enables to unlink the node.

Ah yes, I see now - that's annoyingly awkward.
I think it could possibly be avoided in the list case (if the kref_put
callback just did list_del_init(), the entry could then be checked for an
empty list and disposed of outside the lock), but I'm not sure whether
that's really worth the fuss. Oh well.

Robin.

> Best Regards
>
> Eric
>>
>> Robin.
>>
>>> +	phys_addr_t addr;
>>> +	dma_addr_t iova;
>>> +	size_t size;
>>> +};
>>> +
>>> +/* Reserved binding RB-tree manipulation */
>>> +
>>> +/* @d->reserved_lock must be held */
>>> +static struct iommu_reserved_binding *find_reserved_binding(
>>> +				struct iommu_domain *d,
>>> +				phys_addr_t start, size_t size)
>>> +{
>>> +	struct rb_node *node = d->reserved_binding_list.rb_node;
>>> +
>>> +	while (node) {
>>> +		struct iommu_reserved_binding *binding =
>>> +			rb_entry(node, struct iommu_reserved_binding, node);
>>> +
>>> +		if (start + size <= binding->addr)
>>> +			node = node->rb_left;
>>> +		else if (start >= binding->addr + binding->size)
>>> +			node = node->rb_right;
>>> +		else
>>> +			return binding;
>>> +	}
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>> +/* @d->reserved_lock must be held */
>>> +static void link_reserved_binding(struct iommu_domain *d,
>>> +				  struct iommu_reserved_binding *new)
>>> +{
>>> +	struct rb_node **link = &d->reserved_binding_list.rb_node;
>>> +	struct rb_node *parent = NULL;
>>> +	struct iommu_reserved_binding *binding;
>>> +
>>> +	while (*link) {
>>> +		parent = *link;
>>> +		binding = rb_entry(parent, struct iommu_reserved_binding,
>>> +				   node);
>>> +
>>> +		if (new->addr + new->size <= binding->addr)
>>> +			link = &(*link)->rb_left;
>>> +		else
>>> +			link = &(*link)->rb_right;
>>> +	}
>>> +
>>> +	rb_link_node(&new->node, parent, link);
>>> +	rb_insert_color(&new->node, &d->reserved_binding_list);
>>> +}
>>> +
>>> +/* @d->reserved_lock must be held */
>>> +static void unlink_reserved_binding(struct iommu_domain *d,
>>> +				    struct iommu_reserved_binding *old)
>>> +{
>>> +	rb_erase(&old->node, &d->reserved_binding_list);
>>> +}
>>> +
>>>  int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
>>>  				     dma_addr_t iova, size_t size, int prot,
>>>  				     unsigned long order)
>>>
>>
>
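For illustration, a rough sketch of the list_del_init()-based release
described above. This is not the actual patch: the field layout, the helper
names and the locking are assumptions (a plain list head and a spinlock
@d->reserved_lock added to struct iommu_domain by this series). The point is
simply that the release callback only unlinks the entry, so the final
kfree() happens after the lock is dropped and the binding no longer needs a
back-pointer to its domain just to get unlinked.

/*
 * Hypothetical sketch, not the posted series: assumes iommu_domain gains a
 * reserved_binding_list (struct list_head) and a spinlock reserved_lock.
 */
#include <linux/iommu.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct iommu_reserved_binding {
	struct kref		kref;
	struct list_head	node;	/* on d->reserved_binding_list */
	phys_addr_t		addr;
	dma_addr_t		iova;
	size_t			size;
};

/* kref release: only unlink; runs with @d->reserved_lock held */
static void reserved_binding_release(struct kref *kref)
{
	struct iommu_reserved_binding *b =
		container_of(kref, struct iommu_reserved_binding, kref);

	list_del_init(&b->node);
}

/* Drop a reference; free outside the lock if ours was the last one */
static void put_reserved_binding(struct iommu_domain *d,
				 struct iommu_reserved_binding *b)
{
	unsigned long flags;
	bool dead;

	spin_lock_irqsave(&d->reserved_lock, flags);
	kref_put(&b->kref, reserved_binding_release);
	/* an empty (unlinked) node means the release above just ran */
	dead = list_empty(&b->node);
	spin_unlock_irqrestore(&d->reserved_lock, flags);

	if (dead)
		kfree(b);
}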