From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-5.3 required=3.0 tests=BAYES_00,
	HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI,NICE_REPLY_A,SPF_HELO_NONE,
	SPF_PASS,USER_AGENT_SANE_1 autolearn=no autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id E0737C433F5
	for ; Wed, 22 Sep 2021 10:11:49 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id C7BB261168
	for ; Wed, 22 Sep 2021 10:11:49 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S234704AbhIVKNS (ORCPT ); Wed, 22 Sep 2021 06:13:18 -0400
Received: from foss.arm.com ([217.140.110.172]:46124 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234693AbhIVKNQ
	(ORCPT ); Wed, 22 Sep 2021 06:13:16 -0400
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AA1FF11B3;
	Wed, 22 Sep 2021 03:11:46 -0700 (PDT)
Received: from [10.57.95.67] (unknown [10.57.95.67])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8F34A3F719;
	Wed, 22 Sep 2021 03:11:45 -0700 (PDT)
Subject: Re: [RFC PATCH v4 00/39] KVM: arm64: Add Statistical Profiling
	Extension (SPE) support
To: Alexandru Elisei , maz@kernel.org, james.morse@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
From: Suzuki K Poulose
Message-ID: <963f68c8-b109-7ebb-751d-14ce46e3cdde@arm.com>
Date: Wed, 22 Sep 2021 11:11:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
	Gecko/20100101 Thunderbird/78.14.0
MIME-Version: 1.0
In-Reply-To:
<20210825161815.266051-1-alexandru.elisei@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 25/08/2021 17:17, Alexandru Elisei wrote:
> This is v4 of the SPE series posted at [1]. v2 can be found at [2], and
> the original series at [3].
>
> Statistical Profiling Extension (SPE) is an optional feature added in
> ARMv8.2. It allows sampling, at regular intervals, of the operations
> executed by the PE and storing a record of each operation in a memory
> buffer. A high-level overview of the extension is presented in an
> article on arm.com [4].
>
> This is another complete rewrite of the series, and nothing is set in
> stone. If you think of a better way to do things, please suggest it.
>
>
> Features added
> ==============
>
> The rewrite enabled me to add support for several features not present
> in the previous iteration:
>
> - Support for heterogeneous systems, where only some of the CPUs support
>   SPE. This is accomplished via the KVM_ARM_VCPU_SUPPORTED_CPUS VCPU
>   ioctl.
>
> - Support for VM migration with the
>   KVM_ARM_VCPU_SPE_CTRL(KVM_ARM_VCPU_SPE_STOP) VCPU ioctl.
>
> - The requirement for userspace to mlock() the guest memory has been
>   removed; userspace can now change memory contents after the memory is
>   mapped at stage 2.
>
> - Better debugging of guest memory pinning by printing a warning when we
>   get an unexpected read or write fault. This helped me catch several
>   bugs during development and has already proven very useful. Many
>   thanks to James, who suggested it when reviewing v3.
>
>
> Missing features
> ================
>
> I've tried to keep the series as small as possible to make it easier to
> review, while implementing the core functionality needed for the SPE
> emulation.
> As such, I've chosen not to implement several features:
>
> - Host profiling a guest which has the SPE feature bit set (see open
>   questions).
>
> - No errata workarounds have been implemented yet, and there are quite a
>   few of them for Neoverse N1 and Neoverse V1.
>
> - Disabling CONFIG_NUMA_BALANCING is a hack to get KVM SPE to work. I am
>   investigating other ways to get around automatic NUMA balancing, like
>   requiring userspace to disable it via set_mempolicy(). I am also going
>   to look at how VFIO gets around it. Suggestions welcome.
>
> - There's plenty of room for optimization. Off the top of my head: using
>   block mappings at stage 2, batch pinning of pages (similar to what
>   VFIO does), optimizing the way KVM keeps track of pinned pages (using
>   a linked list triples the memory usage), context-switching the SPE
>   registers on vcpu_load/vcpu_put on VHE if the host is not profiling,
>   locking optimizations, etc.
>
> - ...and others. I'm sure I'm missing at least a few things which are
>   important for someone.
>
>
> Known issues
> ============
>
> This is an RFC, so keep in mind that there will almost certainly be
> scary bugs. For example, below is a list of known issues which don't
> affect the correctness of the emulation, and which I'm planning to fix
> in a future iteration:
>
> - With CONFIG_PROVE_LOCKING=y, lockdep complains about lock contention
>   when the VCPU executes the dcache clean pending ops.
>
> - With CONFIG_PROVE_LOCKING=y, KVM will hit a BUG at
>   kvm_lock_all_vcpus()->mutex_trylock(&vcpu->mutex) with more than 48
>   VCPUs.
>
>   This BUG statement can also be triggered with mainline. To reproduce
>   it, compile kvmtool from this branch [5] and follow the instructions
>   in the kvmtool commit message.
>
>   One workaround could be to stop trying to lock all VCPUs when locking
>   a memslot and to document that no VCPUs may be run before the ioctl
>   completes, otherwise bad things might happen to the VM.
>
>
> Open questions
> ==============
>
> 1. Implementing support for host profiling a guest with the SPE feature
> means setting the profiling buffer owning regime to EL2. While that is in
> effect, PMBIDR_EL1.P will equal 1. This has two consequences: if the guest
> probes SPE during this time, the driver will fail; and the guest will be
> able to determine when it is profiled. I see two options here:

This doesn't mean that EL2 owns the SPE. It only tells the guest that a
higher Exception level owns the profiling buffer; it could just as well be
EL3 (e.g., MDCR_EL3.NSPB == 0 or 1). So I think this is architecturally
correct, as long as we trap the guest's accesses to the other SPE
registers and inject an UNDEF.

Thanks,
Suzuki