From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Yu Zhang, Chao Peng,
	"Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
	dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com
Subject: [PATCH v4 06/12] KVM: Use kvm_userspace_memory_region_ext
Date: Tue, 18 Jan 2022 21:21:15 +0800
Message-Id: <20220118132121.31388-7-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220118132121.31388-1-chao.p.peng@linux.intel.com>
References: <20220118132121.31388-1-chao.p.peng@linux.intel.com>

Use the new extended memslot structure kvm_userspace_memory_region_ext.
The extended fields (private_fd/private_offset) are copied from userspace
only when KVM_MEM_PRIVATE is set. Internally, the old
kvm_userspace_memory_region is still used in places where the extended
fields are not needed.

Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 arch/x86/kvm/x86.c       | 12 ++++++------
 include/linux/kvm_host.h |  4 ++--
 virt/kvm/kvm_main.c      | 30 ++++++++++++++++++++----------
 3 files changed, 28 insertions(+), 18 deletions(-)
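Not part of the patch: for reviewers, a minimal userspace sketch of how the
extended layout is meant to be handed to KVM_SET_USER_MEMORY_REGION. The
field layout of kvm_userspace_memory_region_ext and the KVM_MEM_PRIVATE
value are assumptions taken from this series' uapi patch rather than current
upstream headers (hence the _sketch suffix and the fallback #define); only
struct kvm_userspace_memory_region and the ioctl itself are existing ABI.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_MEM_PRIVATE
/* Assumed value: the flag is introduced by this series, not upstream uapi. */
#define KVM_MEM_PRIVATE	(1UL << 2)
#endif

/* Assumed layout: base region followed by private_offset, then private_fd. */
struct kvm_userspace_memory_region_ext_sketch {
	struct kvm_userspace_memory_region region;
	__u64 private_offset;
	__u64 private_fd;
};

static int set_private_slot(int vm_fd, __u32 slot, __u64 gpa, __u64 size,
			    __u64 uaddr, __u64 private_offset, int private_fd)
{
	struct kvm_userspace_memory_region_ext_sketch ext;

	memset(&ext, 0, sizeof(ext));
	ext.region.slot = slot;
	ext.region.flags = KVM_MEM_PRIVATE;
	ext.region.guest_phys_addr = gpa;
	ext.region.memory_size = size;
	ext.region.userspace_addr = uaddr;
	ext.private_offset = private_offset;
	ext.private_fd = private_fd;

	/*
	 * With this patch, KVM copies sizeof(struct kvm_userspace_memory_region)
	 * first and reads the trailing fields only when KVM_MEM_PRIVATE is set.
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &ext);
}

When KVM_MEM_PRIVATE is clear only the base struct is read, so userspace can
pass the extended struct unconditionally.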
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c194a8cbd25f..7f8d87463391 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11572,13 +11572,13 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 	}
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		struct kvm_userspace_memory_region m;
+		struct kvm_userspace_memory_region_ext m;
 
-		m.slot = id | (i << 16);
-		m.flags = 0;
-		m.guest_phys_addr = gpa;
-		m.userspace_addr = hva;
-		m.memory_size = size;
+		m.region.slot = id | (i << 16);
+		m.region.flags = 0;
+		m.region.guest_phys_addr = gpa;
+		m.region.userspace_addr = hva;
+		m.region.memory_size = size;
 		r = __kvm_set_memory_region(kvm, &m);
 		if (r < 0)
 			return ERR_PTR_USR(r);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 5011ac35bc50..26118a45f0bb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -977,9 +977,9 @@ enum kvm_mr_change {
 };
 
 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem);
+			  const struct kvm_userspace_memory_region_ext *region_ext);
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem);
+			    const struct kvm_userspace_memory_region_ext *region_ext);
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 168d0ab93c88..ecf94e2548f7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1815,8 +1815,9 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
  * Must be called holding kvm->slots_lock for write.
  */
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem)
+			    const struct kvm_userspace_memory_region_ext *region_ext)
 {
+	const struct kvm_userspace_memory_region *mem = &region_ext->region;
 	struct kvm_memory_slot *old, *new;
 	struct kvm_memslots *slots;
 	enum kvm_mr_change change;
@@ -1919,24 +1920,24 @@ int __kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
 
 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem)
+			  const struct kvm_userspace_memory_region_ext *region_ext)
 {
 	int r;
 
 	mutex_lock(&kvm->slots_lock);
-	r = __kvm_set_memory_region(kvm, mem);
+	r = __kvm_set_memory_region(kvm, region_ext);
 	mutex_unlock(&kvm->slots_lock);
 	return r;
 }
 EXPORT_SYMBOL_GPL(kvm_set_memory_region);
 
 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-					  struct kvm_userspace_memory_region *mem)
+					  struct kvm_userspace_memory_region_ext *region_ext)
 {
-	if ((u16)mem->slot >= KVM_USER_MEM_SLOTS)
+	if ((u16)region_ext->region.slot >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
-	return kvm_set_memory_region(kvm, mem);
+	return kvm_set_memory_region(kvm, region_ext);
 }
 
 #ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
@@ -4482,14 +4483,23 @@ static long kvm_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_SET_USER_MEMORY_REGION: {
-		struct kvm_userspace_memory_region kvm_userspace_mem;
+		struct kvm_userspace_memory_region_ext region_ext;
 
 		r = -EFAULT;
-		if (copy_from_user(&kvm_userspace_mem, argp,
-				   sizeof(kvm_userspace_mem)))
+		if (copy_from_user(&region_ext, argp,
+				   sizeof(struct kvm_userspace_memory_region)))
 			goto out;
+		if (region_ext.region.flags & KVM_MEM_PRIVATE) {
+			int offset = offsetof(
+				struct kvm_userspace_memory_region_ext,
+				private_offset);
+			if (copy_from_user(&region_ext.private_offset,
+					   argp + offset,
+					   sizeof(region_ext) - offset))
+				goto out;
+		}
 
-		r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem);
+		r = kvm_vm_ioctl_set_memory_region(kvm, &region_ext);
 		break;
 	}
 	case KVM_GET_DIRTY_LOG: {
-- 
2.17.1