From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
	robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
	Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
	christian.koenig@amd.com, zhangfei.gao@linaro.org, jgg@ziepe.ca,
	xuzaibo@huawei.com, Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v5 08/25] arm64: mm: Pin down ASIDs for sharing mm with devices
Date: Tue, 14 Apr 2020 19:02:36 +0200
Message-Id: <20200414170252.714402-9-jean-philippe@linaro.org>
In-Reply-To: <20200414170252.714402-1-jean-philippe@linaro.org>
References: <20200414170252.714402-1-jean-philippe@linaro.org>

To enable address space sharing with the IOMMU, introduce mm_context_get()
and mm_context_put(), which pin down a context and ensure that it keeps
its ASID after a rollover. Export the symbols to let the modular SMMUv3
driver use them.

Pinning is necessary because a device constantly needs a valid ASID,
unlike tasks that only require one when running. Without pinning, we would
need to notify the IOMMU when we're about to use a new ASID for a task,
and it would get complicated when a new task is assigned a shared ASID.
Consider the following scenario with no ASID pinned:

1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1)
2. Task t2 is scheduled on CPUx, gets ASID (1, 2)
3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1)
   We would now have to immediately generate a new ASID for t1, notify
   the IOMMU, and finally enable task tn. We are holding the lock during
   all that time, since we can't afford having another CPU trigger a
   rollover. The IOMMU issues invalidation commands that can take tens of
   milliseconds.

It gets needlessly complicated. All we wanted to do was schedule task tn,
which has no business with the IOMMU. By letting the IOMMU pin tasks when
needed, we avoid stalling the slow path, and let the pinning fail when
we're out of shareable ASIDs.

After a rollover, the allocator expects at least one ASID to be available
in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS - 1)
is the maximum number of ASIDs that can be shared with the IOMMU.
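As an illustration only (not part of this patch), a driver that binds an
mm to a device could use the pair roughly as follows. The bind/unbind
context and the error code are assumptions; the real caller is the SMMUv3
driver added later in this series:

	/* Hypothetical bind path: pin the mm's ASID for the device */
	asid = mm_context_get(mm);
	if (!asid)
		return -ENOSPC;	/* out of shareable ASIDs */
	/* ... program the ASID into the device's context descriptor ... */

	/* Hypothetical unbind path: release the pin */
	mm_context_put(mm);

For a sense of scale, assuming 16-bit ASIDs without KPTI (65536 user
ASIDs) and 8 possible CPUs, the limit computed in asids_update_limit()
below would be 65536 - 8 - 2 = 65526 pinnable ASIDs (the code reserves
one slot more than the (NR_ASIDS - NR_CPUS - 1) bound described above).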
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v4->v5: extract helper macro
---
 arch/arm64/include/asm/mmu.h         |  1 +
 arch/arm64/include/asm/mmu_context.h | 11 +++-
 arch/arm64/mm/context.c              | 95 +++++++++++++++++++++++++++-
 3 files changed, 104 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 68140fdd89d6b..bbdd291e31d59 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,7 @@
 
 typedef struct {
 	atomic64_t	id;
+	unsigned long	pinned;
 	void		*vdso;
 	unsigned long	flags;
 } mm_context_t;
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index ab46187c63001..69599a64945b0 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -177,7 +177,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 #define destroy_context(mm)		do { } while(0)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
-#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
+static inline int
+init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
+	atomic64_set(&mm->context.id, 0);
+	mm->context.pinned = 0;
+	return 0;
+}
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 static inline void update_saved_ttbr0(struct task_struct *tsk,
@@ -250,6 +256,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
+unsigned long mm_context_get(struct mm_struct *mm);
+void mm_context_put(struct mm_struct *mm);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index d702d60e64dab..d0ddd413f5645 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -27,6 +27,10 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
 
+static unsigned long max_pinned_asids;
+static unsigned long nr_pinned_asids;
+static unsigned long *pinned_asid_map;
+
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
 
@@ -74,6 +78,9 @@ void verify_cpu_asid_bits(void)
 
 static void set_kpti_asid_bits(void)
 {
+	unsigned int k;
+	u8 *dst = (u8 *)asid_map;
+	u8 *src = (u8 *)pinned_asid_map;
 	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long);
 	/*
 	 * In case of KPTI kernel/user ASIDs are allocated in
@@ -81,7 +88,8 @@ static void set_kpti_asid_bits(void)
 	 * is set, then the ASID will map only userspace. Thus
 	 * mark even as reserved for kernel.
 	 */
-	memset(asid_map, 0xaa, len);
+	for (k = 0; k < len; k++)
+		dst[k] = src[k] | 0xaa;
 }
 
 static void set_reserved_asid_bits(void)
@@ -89,7 +97,7 @@ static void set_reserved_asid_bits(void)
 	if (arm64_kernel_unmapped_at_el0())
 		set_kpti_asid_bits();
 	else
-		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+		bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);
 }
 
 #define asid_gen_match(asid) \
@@ -165,6 +173,14 @@ static u64 new_context(struct mm_struct *mm)
 		if (check_update_reserved_asid(asid, newasid))
 			return newasid;
 
+		/*
+		 * If it is pinned, we can keep using it. Note that reserved
+		 * takes priority, because even if it is also pinned, we need to
+		 * update the generation into the reserved_asids.
+		 */
+		if (mm->context.pinned)
+			return newasid;
+
 		/*
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
@@ -254,6 +270,68 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 		cpu_switch_mm(mm->pgd, mm);
 }
 
+unsigned long mm_context_get(struct mm_struct *mm)
+{
+	unsigned long flags;
+	u64 asid;
+
+	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+
+	asid = atomic64_read(&mm->context.id);
+
+	if (mm->context.pinned) {
+		mm->context.pinned++;
+		asid &= ~ASID_MASK;
+		goto out_unlock;
+	}
+
+	if (nr_pinned_asids >= max_pinned_asids) {
+		asid = 0;
+		goto out_unlock;
+	}
+
+	if (!asid_gen_match(asid)) {
+		/*
+		 * We went through one or more rollover since that ASID was
+		 * used. Ensure that it is still valid, or generate a new one.
+		 */
+		asid = new_context(mm);
+		atomic64_set(&mm->context.id, asid);
+	}
+
+	asid &= ~ASID_MASK;
+
+	nr_pinned_asids++;
+	__set_bit(asid2idx(asid), pinned_asid_map);
+	mm->context.pinned++;
+
+out_unlock:
+	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+
+	/* Set the equivalent of USER_ASID_BIT */
+	if (asid && IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+		asid |= 1;
+
+	return asid;
+}
+EXPORT_SYMBOL_GPL(mm_context_get);
+
+void mm_context_put(struct mm_struct *mm)
+{
+	unsigned long flags;
+	u64 asid = atomic64_read(&mm->context.id) & ~ASID_MASK;
+
+	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+
+	if (--mm->context.pinned == 0) {
+		__clear_bit(asid2idx(asid), pinned_asid_map);
+		nr_pinned_asids--;
+	}
+
+	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+}
+EXPORT_SYMBOL_GPL(mm_context_put);
+
 /* Errata workaround post TTBRx_EL1 update. */
 asmlinkage void post_ttbr_update_workaround(void)
 {
@@ -303,6 +381,13 @@ static int asids_update_limit(void)
 	WARN_ON(num_available_asids - 1 <= num_possible_cpus());
 	pr_info("ASID allocator initialised with %lu entries\n",
 		num_available_asids);
+
+	/*
+	 * We assume that an ASID is always available after a rollover. This
+	 * means that even if all CPUs have a reserved ASID, there still is at
+	 * least one slot available in the asid map.
+	 */
+	max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
 	return 0;
 }
 arch_initcall(asids_update_limit);
@@ -317,6 +402,12 @@ static int asids_init(void)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
+	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
+				  sizeof(*pinned_asid_map), GFP_KERNEL);
+	if (!pinned_asid_map)
+		panic("Failed to allocate pinned ASID bitmap\n");
+	nr_pinned_asids = 0;
+
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
 	 * caps are not finalized yet, so it is safer to assume KPTI
-- 
2.26.0