iommu.lists.linux-foundation.org archive mirror
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: Will Deacon <will@kernel.org>
Cc: fenghua.yu@intel.com, catalin.marinas@arm.com,
	zhengxiang9@huawei.com, hch@infradead.org, linux-mm@kvack.org,
	iommu@lists.linux-foundation.org, zhangfei.gao@linaro.org,
	robin.murphy@arm.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v8 04/12] arm64: mm: Pin down ASIDs for sharing mm with devices
Date: Thu, 16 Jul 2020 17:44:19 +0200	[thread overview]
Message-ID: <20200716154419.GB447208@myrica> (raw)
In-Reply-To: <20200713154622.GC3072@willie-the-truck>

Hi,

Thanks for reviewing. I think these 3 or 4 patches are the trickiest in
the whole SVA series.

On Mon, Jul 13, 2020 at 04:46:23PM +0100, Will Deacon wrote:
> On Thu, Jun 18, 2020 at 05:51:17PM +0200, Jean-Philippe Brucker wrote:
> > diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> > index 68140fdd89d6b..bbdd291e31d59 100644
> > --- a/arch/arm64/include/asm/mmu.h
> > +++ b/arch/arm64/include/asm/mmu.h
> > @@ -19,6 +19,7 @@
> >  
> >  typedef struct {
> >  	atomic64_t	id;
> > +	unsigned long	pinned;
> 
> bool?

It's a refcount. I'll change it to refcount_t because it looks nicer
overall, even though it doesn't need to be atomic.
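To make the pin/unpin semantics concrete, here is a minimal userspace model of what `context.pinned` as a refcount would behave like; names and types are illustrative (in the kernel it would be a `refcount_t`, not a plain counter), not the actual diff:

```c
#include <assert.h>

/* Userspace model of the proposed change: context.pinned is a
 * refcount rather than a flag. Illustrative only; the kernel
 * version would use refcount_t under cpu_asid_lock. */
struct ctx_model {
	unsigned long pinned;	/* number of outstanding pins */
};

/* Returns 1 if this call took the first pin (the caller must then
 * reserve the ASID in the pinned bitmap), 0 otherwise. */
static int ctx_pin(struct ctx_model *c)
{
	return c->pinned++ == 0;
}

/* Returns 1 if this call dropped the last pin (the caller must then
 * clear the ASID from the pinned bitmap), 0 otherwise. */
static int ctx_unpin(struct ctx_model *c)
{
	return --c->pinned == 0;
}
```

Only the first pin and the last unpin touch the pinned bitmap; intermediate calls just bump the count.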

> >  static void set_kpti_asid_bits(void)
> >  {
> > +	unsigned int k;
> > +	u8 *dst = (u8 *)asid_map;
> > +	u8 *src = (u8 *)pinned_asid_map;
> >  	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long);
> >  	/*
> >  	 * In case of KPTI kernel/user ASIDs are allocated in
> > @@ -81,7 +88,8 @@ static void set_kpti_asid_bits(void)
> >  	 * is set, then the ASID will map only userspace. Thus
> >  	 * mark even as reserved for kernel.
> >  	 */
> > -	memset(asid_map, 0xaa, len);
> > +	for (k = 0; k < len; k++)
> > +		dst[k] = src[k] | 0xaa;
> 
> Can you use __bitmap_replace() here? I think it would be clearer to use the
> bitmap API wherever possible, since casting 'unsigned long *' to 'u8 *'
> just makes me worry about endianness issues (although in this case I don't
> think it's a problem).

I can do better actually: initialize pinned_asid_map with the kernel ASID
bits at boot and just copy the bitmap on rollover.
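A userspace sketch of that simplification, with made-up sizes: if `pinned_asid_map` already carries the kernel-half `0xaa` pattern from boot, the rollover path becomes a plain copy instead of an OR loop (the kernel version would use `bitmap_copy()` on `unsigned long` words rather than byte arrays):

```c
#include <assert.h>
#include <string.h>

#define NUM_ASIDS 256
#define MAP_BYTES (NUM_ASIDS / 8)

/* Illustrative byte-wide model of the two bitmaps. */
static unsigned char pinned_asid_map[MAP_BYTES];
static unsigned char asid_map[MAP_BYTES];

static void boot_init_pinned_map(void)
{
	/* Mark every even (kernel) ASID reserved, once, at boot. */
	memset(pinned_asid_map, 0xaa, MAP_BYTES);
}

static void set_kpti_asid_bits(void)
{
	/* Rollover path: a straight copy, no OR loop needed. */
	memcpy(asid_map, pinned_asid_map, MAP_BYTES);
}
```

Pinned user ASIDs are simply extra set bits in `pinned_asid_map`, so they survive the copy along with the kernel bits.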

> > +unsigned long mm_context_get(struct mm_struct *mm)
> > +{
> > +	unsigned long flags;
> > +	u64 asid;
> > +
> > +	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
> > +
> > +	asid = atomic64_read(&mm->context.id);
> > +
> > +	if (mm->context.pinned) {
> > +		mm->context.pinned++;
> > +		asid &= ~ASID_MASK;
> > +		goto out_unlock;
> > +	}
> > +
> > +	if (nr_pinned_asids >= max_pinned_asids) {
> > +		asid = 0;
> > +		goto out_unlock;
> > +	}
> > +
> > +	if (!asid_gen_match(asid)) {
> > +		/*
> > +		 * We went through one or more rollover since that ASID was
> > +		 * used. Ensure that it is still valid, or generate a new one.
> > +		 */
> > +		asid = new_context(mm);
> > +		atomic64_set(&mm->context.id, asid);
> > +	}
> > +
> > +	asid &= ~ASID_MASK;
> > +
> > +	nr_pinned_asids++;
> > +	__set_bit(asid2idx(asid), pinned_asid_map);
> > +	mm->context.pinned++;
> > +
> > +out_unlock:
> > +	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
> 
> Maybe stick the & ~ASID_MASK here so it's easier to read?
> 
> > +	/* Set the equivalent of USER_ASID_BIT */
> > +	if (asid && IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> > +		asid |= 1;
> > +
> > +	return asid;
> > +}
> > +EXPORT_SYMBOL_GPL(mm_context_get);
> 
> That's quite a generic symbol name to export... maybe throw 'arm64_' in
> front of it?
> 
> > +
> > +void mm_context_put(struct mm_struct *mm)
> > +{
> > +	unsigned long flags;
> > +	u64 asid = atomic64_read(&mm->context.id) & ~ASID_MASK;
> 
> I don't think you need the masking here.

Right, asid2idx() takes care of it.
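A tiny illustration of why the pre-masking is redundant, using made-up bit widths (the real split between index and generation bits depends on the hardware ASID size):

```c
#include <assert.h>

/* Illustrative definitions: the low bits index the bitmap, the high
 * bits hold the rollover generation, and asid2idx() strips the
 * generation. Masking before calling it is therefore a no-op. */
#define ASID_BITS	8
#define ASID_MASK	(~0UL << ASID_BITS)
#define asid2idx(asid)	((asid) & ~ASID_MASK)
```

Whatever generation bits are set, `asid2idx()` discards them itself.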

> 
> > +	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
> > +
> > +	if (--mm->context.pinned == 0) {
> > +		__clear_bit(asid2idx(asid), pinned_asid_map);
> > +		nr_pinned_asids--;
> > +	}
> > +
> > +	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(mm_context_put);
> 
> Same naming comment here.
> 
> >  /* Errata workaround post TTBRx_EL1 update. */
> >  asmlinkage void post_ttbr_update_workaround(void)
> >  {
> > @@ -303,6 +381,13 @@ static int asids_update_limit(void)
> >  	WARN_ON(num_available_asids - 1 <= num_possible_cpus());
> >  	pr_info("ASID allocator initialised with %lu entries\n",
> >  		num_available_asids);
> > +
> > +	/*
> > +	 * We assume that an ASID is always available after a rollover. This
> > +	 * means that even if all CPUs have a reserved ASID, there still is at
> > +	 * least one slot available in the asid map.
> > +	 */
> > +	max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
> 
> Is it worth checking that assumption, rather than setting max_pinned_asids
> to a massive value?

The comment isn't right, it should be something like:

	"There must always be an ASID available after a rollover. Ensure
	that, even if all CPUs have a reserved ASID and the maximum number
	of ASIDs are pinned, there still is at least one empty slot in the
	ASID map."

I do wonder if we should reduce max_pinned_asids further, because the
system will probably become unusable well before it gets down to a single
available ASID.  But I think we'll eventually need higher-level controls
for SVA, such as cgroups, to prevent users from pinning too many ASIDs.
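To make the headroom arithmetic concrete, here is the limit computed with hypothetical numbers (8 user ASID bits, 4 CPUs; the real values depend on the hardware and config). After a rollover each CPU may hold one reserved ASID, so subtracting num_possible_cpus plus a margin of 2 leaves at least one empty slot in the map even when every pin is taken:

```c
#include <assert.h>

/* Illustrative version of the limit computed in asids_update_limit():
 * reserve one slot per possible CPU plus a small margin, so that
 * new_context() can always find a free ASID after rollover. */
static unsigned long max_pinned(unsigned long num_available_asids,
				unsigned long num_possible_cpus)
{
	return num_available_asids - num_possible_cpus - 2;
}
```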

> 
> >  	return 0;
> >  }
> >  arch_initcall(asids_update_limit);
> > @@ -317,6 +402,12 @@ static int asids_init(void)
> >  		panic("Failed to allocate bitmap for %lu ASIDs\n",
> >  		      NUM_USER_ASIDS);
> >  
> > +	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
> > +				  sizeof(*pinned_asid_map), GFP_KERNEL);
> > +	if (!pinned_asid_map)
> > +		panic("Failed to allocate pinned ASID bitmap\n");
> 
> Why can't we continue in this case without support for pinned ASIDs?

Good idea, I'll change that.
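One way that fallback might look, as a userspace sketch rather than the final patch: leave `pinned_asid_map` NULL when the allocation fails and have the pin operation fail cleanly, so the rest of the ASID allocator keeps working and only SVA pinning is unavailable:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative fallback: a NULL pinned bitmap means pinning is
 * unsupported, instead of panicking at init time. */
static unsigned long *pinned_asid_map;	/* NULL => pinning unsupported */

static unsigned long mm_context_get_sketch(unsigned long asid)
{
	if (!pinned_asid_map)
		return 0;	/* callers already treat 0 as "could not pin" */
	/* ... normal pinning path would go here ... */
	return asid;
}
```

This reuses the existing convention that a return value of 0 means the ASID could not be pinned, so callers need no new error handling.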

Thanks,
Jean