From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 9 Apr 2024 13:40:08 +0100
From: Will Deacon
To: Jason Gunthorpe
Cc: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org,
 Robin Murphy, Lu Baolu, Eric Auger, Jean-Philippe Brucker, Joerg Roedel,
 Kevin Tian, kernel test robot, Moritz Fischer, Moritz Fischer,
 Michael Shavit, Nicolin Chen, patches@lists.linux.dev, Shameer Kolothum,
 Mostafa Saleh, Tony Zhu, Yi Liu, Zhangfei Gao
Subject: Re: [PATCH v6 06/29] iommu/arm-smmu-v3: Add an ops indirection to the STE code
Message-ID: <20240409124007.GA23088@willie-the-truck>
References: <0-v6-228e7adf25eb+4155-smmuv3_newapi_p2_jgg@nvidia.com>
 <6-v6-228e7adf25eb+4155-smmuv3_newapi_p2_jgg@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6-v6-228e7adf25eb+4155-smmuv3_newapi_p2_jgg@nvidia.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, Mar 27, 2024 at 03:07:52PM -0300, Jason Gunthorpe wrote:
> Prepare to put the CD code into the same mechanism. Add an ops indirection
> around all the STE specific code and make the worker functions independent
> of the entry content being processed.
>
> get_used and sync ops are provided to hook the correct code.
>
> Signed-off-by: Michael Shavit
> Reviewed-by: Michael Shavit
> Tested-by: Nicolin Chen
> Tested-by: Shameer Kolothum
> Signed-off-by: Jason Gunthorpe
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 170 ++++++++++++--------
>  1 file changed, 102 insertions(+), 68 deletions(-)

Any chance we can pull the STE testing stuff forward from Michael once we
have the entry writing ops indirection, please? It would be nice to land
that before adding the CD support, I think.

> @@ -1102,17 +1111,14 @@ static bool entry_set(struct arm_smmu_device *smmu, ioasid_t sid,
>   * V=0 process. This relies on the IGNORED behavior described in the
>   * specification.
>   */
> -static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
> -			       struct arm_smmu_ste *entry,
> -			       const struct arm_smmu_ste *target)
> +static void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer,
> +				 __le64 *entry, const __le64 *target)
>  {
> -	unsigned int num_entry_qwords = ARRAY_SIZE(target->data);
> -	struct arm_smmu_device *smmu = master->smmu;
> -	struct arm_smmu_ste unused_update;
> +	__le64 unused_update[NUM_ENTRY_QWORDS];
>  	u8 used_qword_diff;
>
>  	used_qword_diff =
> -		arm_smmu_entry_qword_diff(entry, target, &unused_update);
> +		arm_smmu_entry_qword_diff(writer, entry, target, unused_update);
>  	if (hweight8(used_qword_diff) == 1) {
>  		/*
>  		 * Only one qword needs its used bits to be changed. This is a

nit: This comment (lost in the diff context) refers to STEs a couple of
times. Please update to e.g. "STE/CD".

Will
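
For reference, a minimal sketch of the ops indirection being discussed
might look like the following. The struct and callback names are inferred
from the quoted commit message ("get_used and sync ops") and from the
writer argument visible in the diff; the exact definitions in the patch
may differ.

/*
 * Sketch only, not the kernel's exact code: arm_smmu_write_entry() works
 * on a plain array of qwords and gets the STE- or CD-specific behaviour
 * through a small ops table, so the same careful update sequence can be
 * reused for both table formats.
 */
#define NUM_ENTRY_QWORDS 8		/* assumed: STEs and CDs are 8 qwords */

struct arm_smmu_master;			/* provided by the driver */
struct arm_smmu_entry_writer;

struct arm_smmu_entry_writer_ops {
	/* Fill 'used' with the bits of 'entry' the hardware will look at. */
	void (*get_used)(const __le64 *entry, __le64 *used);
	/* Publish the qwords written so far (e.g. invalidate + sync). */
	void (*sync)(struct arm_smmu_entry_writer *writer);
};

struct arm_smmu_entry_writer {
	const struct arm_smmu_entry_writer_ops *ops;
	struct arm_smmu_master *master;
};

An STE writer would then presumably hook the existing STE used-bit
computation and CFGI_STE invalidation behind these two ops, with the CD
code supplying its own pair later in the series.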