Date: Wed, 17 Nov 2021 10:53:04 -0700
From: Alex Williamson
To:
Cc: Cornelia Huck, Max Gurtovoy, Yishai Hadas, Zhen Lei, Jason Gunthorpe
Subject: Re: [RFC 2/3] vfio/pci: virtualize PME related registers bits and initialize to zero
Message-ID: <20211117105304.5f9f9d72.alex.williamson@redhat.com>
In-Reply-To: <20211115133640.2231-3-abhsahu@nvidia.com>
References: <20211115133640.2231-1-abhsahu@nvidia.com> <20211115133640.2231-3-abhsahu@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 15 Nov 2021 19:06:39 +0530 <abhsahu@nvidia.com> wrote:

> From: Abhishek Sahu <abhsahu@nvidia.com>
> 
> If a PME event is generated by a PCI device, it is mostly handled in
> the host by the root port PME code. For example, in the case of PCIe,
> the PME event is sent to the root port, which then generates the PME
> interrupt. This is handled on the host side in drivers/pci/pcie/pme.c,
> where pci_check_pme_status() is called and the PME_Status and PME_En
> bits are cleared. So the guest OS that is using the vfio-pci device
> never learns about the PME event.
> 
> Handling these PME events inside the guest would require a framework
> to forward them to the virtual machine monitor. In the meantime, we
> can virtualize the PME related register bits and initialize them to
> zero, so the vfio-pci device user will assume the device is not
> capable of asserting the PME# signal from any power state.
> 
> Signed-off-by: Abhishek Sahu <abhsahu@nvidia.com>
> ---
>  drivers/vfio/pci/vfio_pci_config.c | 32 +++++++++++++++++++++++++++++-
>  1 file changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
> index 6e58b4bf7a60..fb3a503a5b99 100644
> --- a/drivers/vfio/pci/vfio_pci_config.c
> +++ b/drivers/vfio/pci/vfio_pci_config.c
> @@ -738,12 +738,27 @@ static int __init init_pci_cap_pm_perm(struct perm_bits *perm)
>  	 */
>  	p_setb(perm, PCI_CAP_LIST_NEXT, (u8)ALL_VIRT, NO_WRITE);
>  
> +	/*
> +	 * The guests can't process PME events. If any PME event will be
> +	 * generated, then it will be mostly handled in the host and the
> +	 * host will clear the PME_STATUS. So virtualize PME_Support bits.
> +	 * It will be initialized to zero later on.
> +	 */
> +	p_setw(perm, PCI_PM_PMC, PCI_PM_CAP_PME_MASK, NO_WRITE);
> +
>  	/*
>  	 * Power management is defined *per function*, so we can let
>  	 * the user change power state, but we trap and initiate the
>  	 * change ourselves, so the state bits are read-only.
> +	 *
> +	 * The guest can't process PME from D3cold so virtualize PME_Status
> +	 * and PME_En bits. It will be initialized to zero later on.
>  	 */
> -	p_setd(perm, PCI_PM_CTRL, NO_VIRT, ~PCI_PM_CTRL_STATE_MASK);
> +	p_setd(perm, PCI_PM_CTRL,
> +	       PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS,
> +	       ~(PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS |
> +		 PCI_PM_CTRL_STATE_MASK));
> +
>  	return 0;
>  }
>  
> @@ -1412,6 +1427,18 @@ static int vfio_ext_cap_len(struct vfio_pci_core_device *vdev, u16 ecap, u16 epo
>  	return 0;
>  }
>  
> +static void vfio_update_pm_vconfig_bytes(struct vfio_pci_core_device *vdev,
> +					 int offset)
> +{
> +	/*  initialize virtualized PME_Support bits to zero */
> +	*(__le16 *)&vdev->vconfig[offset + PCI_PM_PMC] &=
> +				~cpu_to_le16(PCI_PM_CAP_PME_MASK);
> +
> +	/*  initialize virtualized PME_Status and PME_En bits to zero */
           ^ Extra space here and above.
> +	*(__le16 *)&vdev->vconfig[offset + PCI_PM_CTRL] &=
> +		~cpu_to_le16(PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS);

Perhaps more readable and consistent with elsewhere as:

	__le16 *pmc = (__le16 *)&vdev->vconfig[offset + PCI_PM_PMC];
	__le16 *ctrl = (__le16 *)&vdev->vconfig[offset + PCI_PM_CTRL];

	/* Clear vconfig PME_Support, PME_Status, and PME_En bits */
	*pmc &= ~cpu_to_le16(PCI_PM_CAP_PME_MASK);
	*ctrl &= ~cpu_to_le16(PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS);

Thanks,
Alex

> +}
> +
>  static int vfio_fill_vconfig_bytes(struct vfio_pci_core_device *vdev,
>  				   int offset, int size)
>  {
> @@ -1535,6 +1562,9 @@ static int vfio_cap_init(struct vfio_pci_core_device *vdev)
>  		if (ret)
>  			return ret;
>  
> +		if (cap == PCI_CAP_ID_PM)
> +			vfio_update_pm_vconfig_bytes(vdev, pos);
> +
>  		prev = &vdev->vconfig[pos + PCI_CAP_LIST_NEXT];
>  		pos = next;
>  		caps++;
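
For background on the host-side behavior described in the quoted commit
message: PME_Status in PMCSR is write-1-to-clear, and the host's PME service
clears both PME_Status and PME_En once it has consumed the event, which is why
the guest never observes them. Below is a minimal, illustrative sketch of that
logic, simplified from what pci_check_pme_status() does; the helper name
check_and_clear_pme_status() is made up for illustration, while struct pci_dev,
pci_read_config_word()/pci_write_config_word() and the PCI_PM_CTRL_* macros are
the standard PCI core identifiers:

	#include <linux/pci.h>

	/* Sketch only: return true if a pending PME was consumed */
	static bool check_and_clear_pme_status(struct pci_dev *pdev)
	{
		u16 pmcsr;

		if (!pdev->pm_cap)
			return false;

		pci_read_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, &pmcsr);
		if (!(pmcsr & PCI_PM_CTRL_PME_STATUS))
			return false;

		/*
		 * PME_Status is write-1-to-clear, so writing the value back
		 * with the bit still set clears it; also drop PME_En so the
		 * device stops signaling until it is re-armed.
		 */
		pmcsr &= ~PCI_PM_CTRL_PME_ENABLE;
		pci_write_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, pmcsr);

		return true;
	}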
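
And, for completeness, a sketch of how the helper added by the patch might read
with the review suggestion above folded in; the identifiers all come from the
quoted patch, and this is only the suggestion applied to the full function, not
a tested change:

	static void vfio_update_pm_vconfig_bytes(struct vfio_pci_core_device *vdev,
						 int offset)
	{
		__le16 *pmc = (__le16 *)&vdev->vconfig[offset + PCI_PM_PMC];
		__le16 *ctrl = (__le16 *)&vdev->vconfig[offset + PCI_PM_CTRL];

		/* Clear vconfig PME_Support, PME_Status, and PME_En bits */
		*pmc &= ~cpu_to_le16(PCI_PM_CAP_PME_MASK);
		*ctrl &= ~cpu_to_le16(PCI_PM_CTRL_PME_ENABLE |
				      PCI_PM_CTRL_PME_STATUS);
	}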