Xen-Devel Archive on lore.kernel.org
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@arm.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
Date: Wed, 21 Aug 2019 16:59:03 +0200
Message-ID: <20190821145903.45934-8-roger.pau@citrix.com> (raw)
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>

...and switch vPCI to use this infrastructure for long-running
physmap modification operations.

This allows getting rid of the vPCI-specific modifications done to
handle_hvm_io_completion and generalizes the support for
long-running operations to other internal ioreq servers. Such support
is implemented as a specific handler that can be registered by internal
ioreq servers and that will be called to check for pending work.
Returning true from this handler will prevent the vcpu from resuming
guest execution until the handler returns false.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c         | 55 ++++++++++++++++++++++++++++----
 xen/drivers/vpci/vpci.c          |  3 ++
 xen/include/asm-x86/hvm/domain.h |  1 +
 xen/include/asm-x86/hvm/ioreq.h  |  2 ++
 4 files changed, 55 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index b2582bd3a0..8e160a0a14 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -186,18 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
         if ( s->internal )
+        {
+            if ( s->pending && s->pending(v) )
+            {
+                /*
+                 * Need to raise a scheduler irq in order to prevent the guest
+                 * vcpu from resuming execution.
+                 *
+                 * Note this is not required for external ioreq operations
+                 * because in that case the vcpu is marked as blocked, but this
+                 * cannot be done for long-running internal operations, since
+                 * it would prevent the vcpu from being scheduled and thus the
+                 * long running operation from finishing.
+                 */
+                raise_softirq(SCHEDULE_SOFTIRQ);
+                return false;
+            }
             continue;
+        }
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -518,6 +529,38 @@ int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->pending != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->pending = pending;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 510e3ee771..54b0f31612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -508,6 +508,9 @@ int vpci_register_ioreq(struct domain *d)
         return rc;
 
     rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+    rc = hvm_add_ioreq_pending_handler(d, id, vpci_process_pending);
     if ( rc )
         return rc;
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f0be303517..80a38ffe48 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -73,6 +73,7 @@ struct hvm_ioreq_server {
         };
         struct {
             int (*handler)(struct vcpu *v, ioreq_t *);
+            bool (*pending)(struct vcpu *v);
         };
     };
 };
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 10b9586885..cc3e27d059 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -57,6 +57,8 @@ void hvm_ioreq_init(struct domain *d);
 
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v));
 
 int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
                              unsigned int start_bus, unsigned int end_bus,
-- 
2.22.0



Thread overview: 20+ messages
2019-08-21 14:58 [Xen-devel] [PATCH 0/7] ioreq: add support for internal servers Roger Pau Monne
2019-08-21 14:58 ` [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers Roger Pau Monne
2019-08-21 14:58 ` [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support Roger Pau Monne
2019-08-21 16:24   ` Paul Durrant
2019-08-22  7:23     ` Roger Pau Monné
2019-08-22  8:30       ` Paul Durrant
2019-08-21 14:58 ` [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers Roger Pau Monne
2019-08-21 16:29   ` Paul Durrant
2019-08-22  7:40     ` Roger Pau Monné
2019-08-22  8:33       ` Paul Durrant
2019-08-21 14:59 ` [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler Roger Pau Monne
2019-08-21 16:35   ` Paul Durrant
2019-08-22  7:43     ` Roger Pau Monné
2019-08-22  8:38       ` Paul Durrant
2019-08-21 14:59 ` [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions Roger Pau Monne
2019-08-21 14:59 ` [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server Roger Pau Monne
2019-08-21 14:59 ` Roger Pau Monne [this message]
2019-08-22  9:15   ` [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations Paul Durrant
2019-08-22 12:55     ` Roger Pau Monné
2019-08-22 13:07       ` Paul Durrant
