From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jacob Pan
To: iommu@lists.linux-foundation.org, LKML, Joerg Roedel, Alex Williamson
Cc: "Lu Baolu", David Woodhouse, Yi Liu, "Tian, Kevin", Raj Ashok,
    "Christoph Hellwig", Jean-Philippe Brucker, Eric Auger, Jonathan Corbet,
    Jacob Pan
Subject: [PATCH v7 2/7] iommu/uapi: Add argsz for user filled data
Date: Wed, 29 Jul 2020 17:21:02 -0700
Message-Id: <1596068467-49322-3-git-send-email-jacob.jun.pan@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1596068467-49322-1-git-send-email-jacob.jun.pan@linux.intel.com>
References: <1596068467-49322-1-git-send-email-jacob.jun.pan@linux.intel.com>

As the IOMMU UAPI gets extended, user data size may increase. To support
backward compatibility, this patch introduces a size field (argsz) to each
UAPI data structure. It is *always* the user's responsibility to fill in
the correct size. Padding fields are adjusted to ensure 8-byte alignment.
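For illustration only (not part of this patch), a minimal userspace sketch of
how a caller might fill in argsz; the helper name is hypothetical:

    #include <string.h>
    #include <linux/iommu.h>

    /* Hypothetical helper: the caller reports how much data it filled in. */
    static void example_fill_bind_data(struct iommu_gpasid_bind_data *bind)
    {
            memset(bind, 0, sizeof(*bind));
            bind->argsz   = sizeof(*bind);  /* user filled size */
            bind->version = IOMMU_GPASID_BIND_VERSION_1;
            bind->format  = IOMMU_PASID_FORMAT_INTEL_VTD;
            /* remaining fields set according to the guest PASID being bound */
    }

Reporting sizeof(*bind) from the caller lets the kernel tell apart older,
smaller layouts from newer, larger ones without a UAPI version bump.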
Specific scenarios for user data handling are documented in:
Documentation/userspace-api/iommu.rst

Signed-off-by: Liu Yi L
Signed-off-by: Jacob Pan
---
 include/uapi/linux/iommu.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
index e907b7091a46..d5e9014f690e 100644
--- a/include/uapi/linux/iommu.h
+++ b/include/uapi/linux/iommu.h
@@ -135,6 +135,7 @@ enum iommu_page_response_code {
 
 /**
  * struct iommu_page_response - Generic page response information
+ * @argsz: User filled size of this data
  * @version: API version of this structure
  * @flags: encodes whether the corresponding fields are valid
  *         (IOMMU_FAULT_PAGE_RESPONSE_* values)
@@ -143,6 +144,7 @@ enum iommu_page_response_code {
  * @code: response code from &enum iommu_page_response_code
  */
 struct iommu_page_response {
+	__u32	argsz;
 #define IOMMU_PAGE_RESP_VERSION_1	1
 	__u32	version;
 #define IOMMU_PAGE_RESP_PASID_VALID	(1 << 0)
@@ -218,6 +220,7 @@ struct iommu_inv_pasid_info {
 /**
  * struct iommu_cache_invalidate_info - First level/stage invalidation
  *	information
+ * @argsz: User filled size of this data
  * @version: API version of this structure
  * @cache: bitfield that allows to select which caches to invalidate
  * @granularity: defines the lowest granularity used for the invalidation:
@@ -246,6 +249,7 @@ struct iommu_inv_pasid_info {
  *	must support the used granularity.
  */
 struct iommu_cache_invalidate_info {
+	__u32	argsz;
 #define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
 	__u32	version;
 /* IOMMU paging structure cache */
@@ -255,7 +259,7 @@ struct iommu_cache_invalidate_info {
 #define IOMMU_CACHE_INV_TYPE_NR		(3)
 	__u8	cache;
 	__u8	granularity;
-	__u8	padding[2];
+	__u8	padding[6];
 	union {
 		struct iommu_inv_pasid_info pasid_info;
 		struct iommu_inv_addr_info addr_info;
@@ -292,6 +296,7 @@ struct iommu_gpasid_bind_data_vtd {
 
 /**
  * struct iommu_gpasid_bind_data - Information about device and guest PASID binding
+ * @argsz:	User filled size of this data
  * @version:	Version of this data structure
  * @format:	PASID table entry format
  * @flags:	Additional information on guest bind request
@@ -309,17 +314,18 @@ struct iommu_gpasid_bind_data_vtd {
  * PASID to host PASID based on this bind data.
  */
 struct iommu_gpasid_bind_data {
+	__u32 argsz;
 #define IOMMU_GPASID_BIND_VERSION_1	1
 	__u32 version;
 #define IOMMU_PASID_FORMAT_INTEL_VTD	1
 	__u32 format;
+	__u32 addr_width;
 #define IOMMU_SVA_GPASID_VAL	(1 << 0) /* guest PASID valid */
 	__u64 flags;
 	__u64 gpgd;
 	__u64 hpasid;
 	__u64 gpasid;
-	__u32 addr_width;
-	__u8  padding[12];
+	__u8  padding[8];
 	/* Vendor specific data */
 	union {
 		struct iommu_gpasid_bind_data_vtd vtd;
-- 
2.7.4
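A rough kernel-side sketch, for illustration only (not code from this patch;
the copy and validation helpers live elsewhere in the series): a consumer
reads argsz first, sanity-checks it, then copies at most the size it knows
about, so newer and larger user structures remain compatible. The function
name and the PAGE_SIZE upper bound are illustrative assumptions.

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/string.h>
    #include <linux/uaccess.h>
    #include <linux/iommu.h>

    static int example_copy_bind_data(struct iommu_gpasid_bind_data *data,
                                      void __user *uptr)
    {
            __u32 argsz;

            /* argsz is the first field, read it before anything else */
            if (copy_from_user(&argsz, uptr, sizeof(argsz)))
                    return -EFAULT;

            /* reject sizes that cannot possibly be valid */
            if (argsz < sizeof(argsz) || argsz > PAGE_SIZE)
                    return -EINVAL;

            /* copy no more than the kernel's known structure size */
            memset(data, 0, sizeof(*data));
            if (copy_from_user(data, uptr, min_t(__u32, argsz, sizeof(*data))))
                    return -EFAULT;

            return 0;
    }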