From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ram Pai <linuxram@us.ibm.com>
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v5 0/7] Migrate non-migrated pages of a SVM.
Date: Thu, 23 Jul 2020 13:07:17 -0700
Message-Id: <1595534844-16188-1-git-send-email-linuxram@us.ibm.com>
Cc: ldufour@linux.ibm.com, linuxram@us.ibm.com, cclaudio@linux.ibm.com,
    bharata@linux.ibm.com, sathnaga@linux.vnet.ibm.com,
    aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
    bauerman@linux.ibm.com, david@gibson.dropbear.id.au

The time to switch a VM to a Secure-VM (SVM) increases linearly with the
size of the VM: a 100GB VM takes about 7 minutes. This is unacceptable.

This linear increase is caused by suboptimal behavior in the Ultravisor
and the Hypervisor. The Ultravisor unnecessarily migrates every GFN of
the VM from normal memory to secure memory, when it needs to migrate
only the necessary and sufficient GFNs.

However, once that optimization is incorporated in the Ultravisor, the
Hypervisor starts misbehaving. The Hypervisor has a built-in assumption
that the Ultravisor will explicitly request migration of each and every
GFN of the VM. If only the necessary and sufficient GFNs are requested
for migration, the Hypervisor continues to manage the remaining GFNs as
normal GFNs.
This leads to memory corruption, manifested consistently when the SVM
reboots.

The same is true when a memory slot is hotplugged into an SVM. The
Hypervisor expects the Ultravisor to request migration of all GFNs to
secure GFNs, but the Hypervisor cannot handle H_SVM_PAGE_IN requests
from the Ultravisor made in the context of the UV_REGISTER_MEM_SLOT
ucall. This problem manifests as random errors in the SVM when a memory
slot is hotplugged.

This patch series automatically migrates the non-migrated pages of an
SVM, and thus solves the problem.

Testing: Passed rigorous testing using various sized SVMs.

Changelog:

v5:
 . This patch series includes Laurent's fix for memory hotplug/unplug.
 . Drop pages first and then delete the memslot; otherwise the memslot
   does not get cleanly deleted, causing problems during reboot.
   Recreatable through the following set of commands:
      device_add pc-dimm,id=dimm1,memdev=mem1
      device_del dimm1
      device_add pc-dimm,id=dimm1,memdev=mem1
   Further incorporates comments from Bharata:
 . Fix an off-by-one while disabling migration.
 . Reorganized the code to maximize sharing between the init_start path
   and the memory-hotplug path.
 . Locking adjustments in mass page migration during H_SVM_INIT_DONE.
 . Improved recovery on error paths.
 . Additional comments in the code for better understanding.
 . Removed the retry-on-migration-failure code.
 . Re-added the initial patch that adjusts some prototypes, to overcome
   a git problem where it messes up the code context. Had accidentally
   dropped the patch in the last version.

v4:
 . Incorporated Bharata's comments:
   - Optimization: replace the write mmap semaphore with a read mmap
     semaphore.
   - Disable page merging during memory hotplug.
   - Rearranged the patches; consolidated the page-migration-retry
     logic in a single patch.

v3:
 . Optimized the page-migration retry logic.
 . Relax and relinquish the CPU regularly while bulk-migrating the
   non-migrated pages. This was causing soft-lockups; fixed it.
 . Added a new patch to retry page-migration a couple of times before
   returning H_BUSY in H_SVM_PAGE_IN. This issue was seen a few times
   during a 24-hour continuous reboot test of the SVMs.

v2:
 . Fixed a bug observed by Laurent: the states of the GFNs associated
   with Secure-VMs were not reset during memslot flush.
 . Reorganized the code for easier review.
 . Better description of the patch series.

v1:
 . Fixed a bug observed by Bharata: pages that were paged in and later
   paged out must also be skipped from migration during
   H_SVM_INIT_DONE.

Laurent Dufour (3):
  KVM: PPC: Book3S HV: migrate hot plugged memory
  KVM: PPC: Book3S HV: move kvmppc_svm_page_out up
  KVM: PPC: Book3S HV: rework secure mem slot dropping

Ram Pai (4):
  KVM: PPC: Book3S HV: Fix function definition in book3s_hv_uvmem.c
  KVM: PPC: Book3S HV: Disable page merging in H_SVM_INIT_START
  KVM: PPC: Book3S HV: track the state GFNs associated with secure VMs
  KVM: PPC: Book3S HV: in H_SVM_INIT_DONE, migrate remaining normal-GFNs
    to secure-GFNs.

 Documentation/powerpc/ultravisor.rst        |   3 +
 arch/powerpc/include/asm/kvm_book3s_uvmem.h |  16 +
 arch/powerpc/kvm/book3s_hv.c                |  10 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c          | 690 +++++++++++++++++++++-------
 4 files changed, 548 insertions(+), 171 deletions(-)

-- 
1.8.3.1