From: Jason Wang
Date: Thu, 14 Oct 2021 14:32:19 +0800
Subject: Re: [PATCH V2 07/12] virtio-pci: harden INTX interrupts
To: "Michael S. Tsirkin"
Cc: virtualization, linux-kernel, "Hetzelt, Felicitas", "kaplan, david",
    Konrad Rzeszutek Wilk, Boqun Feng, Thomas Gleixner, Peter Zijlstra,
    "Paul E. McKenney"
In-Reply-To: <20211014022438-mutt-send-email-mst@kernel.org>
References: <20211012065227.9953-1-jasowang@redhat.com>
    <20211012065227.9953-8-jasowang@redhat.com>
    <20211013053627-mutt-send-email-mst@kernel.org>
    <20211014014551-mutt-send-email-mst@kernel.org>
    <20211014022438-mutt-send-email-mst@kernel.org>
McKenney" Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Oct 14, 2021 at 2:26 PM Michael S. Tsirkin wrote: > > On Thu, Oct 14, 2021 at 02:20:17PM +0800, Jason Wang wrote: > > On Thu, Oct 14, 2021 at 1:50 PM Michael S. Tsirkin wrote: > > > > > > On Thu, Oct 14, 2021 at 10:35:48AM +0800, Jason Wang wrote: > > > > On Wed, Oct 13, 2021 at 5:42 PM Michael S. Tsirkin wrote: > > > > > > > > > > On Tue, Oct 12, 2021 at 02:52:22PM +0800, Jason Wang wrote: > > > > > > This patch tries to make sure the virtio interrupt handler for INTX > > > > > > won't be called after a reset and before virtio_device_ready(). We > > > > > > can't use IRQF_NO_AUTOEN since we're using shared interrupt > > > > > > (IRQF_SHARED). So this patch tracks the INTX enabling status in a new > > > > > > intx_soft_enabled variable and toggle it during in > > > > > > vp_disable/enable_vectors(). The INTX interrupt handler will check > > > > > > intx_soft_enabled before processing the actual interrupt. > > > > > > > > > > > > Cc: Boqun Feng > > > > > > Cc: Thomas Gleixner > > > > > > Cc: Peter Zijlstra > > > > > > Cc: Paul E. McKenney > > > > > > Signed-off-by: Jason Wang > > > > > > --- > > > > > > drivers/virtio/virtio_pci_common.c | 24 ++++++++++++++++++++++-- > > > > > > drivers/virtio/virtio_pci_common.h | 1 + > > > > > > 2 files changed, 23 insertions(+), 2 deletions(-) > > > > > > > > > > > > diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c > > > > > > index 0b9523e6dd39..5ae6a2a4eb77 100644 > > > > > > --- a/drivers/virtio/virtio_pci_common.c > > > > > > +++ b/drivers/virtio/virtio_pci_common.c > > > > > > @@ -30,8 +30,16 @@ void vp_disable_vectors(struct virtio_device *vdev) > > > > > > struct virtio_pci_device *vp_dev = to_vp_device(vdev); > > > > > > int i; > > > > > > > > > > > > - if (vp_dev->intx_enabled) > > > > > > + if (vp_dev->intx_enabled) { > > > > > > + /* > > > > > > + * The below synchronize() guarantees that any > > > > > > + * interrupt for this line arriving after > > > > > > + * synchronize_irq() has completed is guaranteed to see > > > > > > + * intx_soft_enabled == false. > > > > > > + */ > > > > > > + WRITE_ONCE(vp_dev->intx_soft_enabled, false); > > > > > > synchronize_irq(vp_dev->pci_dev->irq); > > > > > > + } > > > > > > > > > > > > for (i = 0; i < vp_dev->msix_vectors; ++i) > > > > > > disable_irq(pci_irq_vector(vp_dev->pci_dev, i)); > > > > > > @@ -43,8 +51,16 @@ void vp_enable_vectors(struct virtio_device *vdev) > > > > > > struct virtio_pci_device *vp_dev = to_vp_device(vdev); > > > > > > int i; > > > > > > > > > > > > - if (vp_dev->intx_enabled) > > > > > > + if (vp_dev->intx_enabled) { > > > > > > + disable_irq(vp_dev->pci_dev->irq); > > > > > > + /* > > > > > > + * The above disable_irq() provides TSO ordering and > > > > > > + * as such promotes the below store to store-release. 
> > > > > > +		 */
> > > > > > +		WRITE_ONCE(vp_dev->intx_soft_enabled, true);
> > > > > > +		enable_irq(vp_dev->pci_dev->irq);
> > > > > >  		return;
> > > > > > +	}
> > > > > >
> > > > > >  	for (i = 0; i < vp_dev->msix_vectors; ++i)
> > > > > >  		enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
> > > > > > @@ -97,6 +113,10 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
> > > > > >  	struct virtio_pci_device *vp_dev = opaque;
> > > > > >  	u8 isr;
> > > > > >
> > > > > > +	/* read intx_soft_enabled before read others */
> > > > > > +	if (!smp_load_acquire(&vp_dev->intx_soft_enabled))
> > > > > > +		return IRQ_NONE;
> > > > > > +
> > > > > >  	/* reading the ISR has the effect of also clearing it so it's very
> > > > > >  	 * important to save off the value. */
> > > > > >  	isr = ioread8(vp_dev->isr);
> > > > >
> > > > > I don't see why we need this ordering guarantee here.
> > > > >
> > > > > synchronize_irq above makes sure no interrupt handler
> > > > > is in progress.
> > > >
> > > > Yes.
> > > >
> > > > > the handler itself thus does not need
> > > > > any specific order, it is ok if intx_soft_enabled is read
> > > > > after, not before the rest of it.
> > > >
> > > > But the interrupt could be raised after synchronize_irq() which may
> > > > see a false of the intx_soft_enabled.
> > >
> > > You mean a "true" value right? false is what we are writing there.
> >
> > I meant that we want to not go for stuff like vq->callback after the
> > synchronize_irq() after setting intx_soft_enabled to false. Otherwise
> > we may get unexpected results like use after free. Host can craft ISR
> > in this case.
> >
> > > Are you sure it can happen? I think that synchronize_irq makes the value
> > > visible on all CPUs running the irq.
> >
> > Yes, so the false is visible by vp_interrupt(), we can't do the other
> > task before we check intx_soft_enabled.
>
> But the order does not matter. synchronize_irq will make sure
> everything is visible.

But not for the things that happen after synchronize_irq(). E.g. for
remove_vq_common():

static void remove_vq_common(struct virtnet_info *vi)
{
	vi->vdev->config->reset(vi->vdev);

	/* Free unused buffers in both send and recv, if any. */
	free_unused_bufs(vi);

	free_receive_bufs(vi);

	free_receive_page_frags(vi);

	virtnet_del_vqs(vi);
}

The interrupt could be raised by the device after .reset().

Thanks

>
> >
> > >
> > > > In this case we still need to
> > > > make sure intx_soft_enabled is read first instead of allowing other
> > > > operations to be done first, otherwise the intx_soft_enabled is
> > > > meaningless.
> > > >
> > > > Thanks
> > > If intx_soft_enabled were not visible after synchronize_irq then
> > > it does not matter in which order we read it wrt other values,
> > > it still wouldn't work right.
> >
> > Yes.
> >
> > Thanks
>
> We are agreed then? No need for a barrier here, READ_ONCE is enough?
>
> > > > >
> > > > > Just READ_ONCE should be enough, and we can drop the comment.
> > > > >
> > > > > > diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
> > > > > > index a235ce9ff6a5..3c06e0f92ee4 100644
> > > > > > --- a/drivers/virtio/virtio_pci_common.h
> > > > > > +++ b/drivers/virtio/virtio_pci_common.h
> > > > > > @@ -64,6 +64,7 @@ struct virtio_pci_device {
> > > > > >  	/* MSI-X support */
> > > > > >  	int msix_enabled;
> > > > > >  	int intx_enabled;
> > > > > > +	bool intx_soft_enabled;
> > > > > >  	cpumask_var_t *msix_affinity_masks;
> > > > > >  	/* Name strings for interrupts. This size should be enough,
> > > > > >  	 * and I'm too lazy to allocate each name separately. */
> > > > > > --
> > > > > > 2.25.1
> > > > > >
> > >
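
[Editor's note: for context, here is a minimal sketch of what vp_interrupt()
would look like with the plain READ_ONCE() that Michael suggests instead of
smp_load_acquire(). It is not part of the patch above; it assumes the
synchronize_irq() in vp_disable_vectors() is what guarantees that a handler
invocation started after it returns observes intx_soft_enabled == false, so
no acquire ordering is needed inside the handler itself. The rest of the body
follows the existing vp_interrupt() in virtio_pci_common.c.]

static irqreturn_t vp_interrupt(int irq, void *opaque)
{
	struct virtio_pci_device *vp_dev = opaque;
	u8 isr;

	/* Plain load: visibility relies on the synchronize_irq() in
	 * vp_disable_vectors(), not on ordering inside the handler. */
	if (!READ_ONCE(vp_dev->intx_soft_enabled))
		return IRQ_NONE;

	/* reading the ISR has the effect of also clearing it so it's very
	 * important to save off the value. */
	isr = ioread8(vp_dev->isr);

	/* It's definitely not us if the ISR was not high */
	if (!isr)
		return IRQ_NONE;

	/* Configuration change? Tell driver if it wants to know. */
	if (isr & VIRTIO_PCI_ISR_CONFIG)
		vp_config_changed(irq, opaque);

	return vp_vring_interrupt(irq, opaque);
}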