* [PATCH 0/2] ioreq: make use of 'legacy' GFNs
@ 2018-10-05 13:43 Paul Durrant
  2018-10-05 13:43 ` [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once Paul Durrant
  2018-10-05 13:43 ` [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN Paul Durrant
  0 siblings, 2 replies; 10+ messages in thread
From: Paul Durrant @ 2018-10-05 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant

Paul Durrant (2):
  x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once
  x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN

 xen/arch/x86/hvm/hvm.c           |  2 ++
 xen/arch/x86/hvm/ioreq.c         | 50 ++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/domain.h |  3 ++-
 3 files changed, 52 insertions(+), 3 deletions(-)

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once
  2018-10-05 13:43 [PATCH 0/2] ioreq: make use of 'legacy' GFNs Paul Durrant
@ 2018-10-05 13:43 ` Paul Durrant
  2018-10-08 13:16   ` Jan Beulich
  2018-10-08 13:20   ` Jan Beulich
  2018-10-05 13:43 ` [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN Paul Durrant
  1 sibling, 2 replies; 10+ messages in thread
From: Paul Durrant @ 2018-10-05 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich

These parameters should have always been in the 'set once' category
but this has, so far, not been enforced.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/hvm/hvm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 51fc3ec07f..07ec20fb52 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4083,6 +4083,8 @@ static int hvm_allow_set_param(struct domain *d,
     {
     /* The following parameters should only be changed once. */
     case HVM_PARAM_VIRIDIAN:
+    case HVM_PARAM_IOREQ_PFN:
+    case HVM_PARAM_BUFIOREQ_PFN:
     case HVM_PARAM_IOREQ_SERVER_PFN:
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     case HVM_PARAM_ALTP2M:
-- 
2.11.0



* [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN
  2018-10-05 13:43 [PATCH 0/2] ioreq: make use of 'legacy' GFNs Paul Durrant
  2018-10-05 13:43 ` [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once Paul Durrant
@ 2018-10-05 13:43 ` Paul Durrant
  2018-10-08 13:29   ` Jan Beulich
  1 sibling, 1 reply; 10+ messages in thread
From: Paul Durrant @ 2018-10-05 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich

Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be used
by (non-default) ioreq servers.

NOTE: This fixes a compatibility issue. A guest created on a version of
      Xen that pre-dates the initial ioreq server implementation and then
      migrated in will currently fail to resume, because its migration
      stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
      HVM_PARAM_NR_IOREQ_SERVER_PAGES, *unless* the system has an
      emulator domain that uses direct resource mapping (which depends
      on the version of privcmd it happens to have), in which case it
      will not require use of GFNs for the ioreq server shared
      pages.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

A similar compatibility issue with migrated-in VMs exists with Xen 4.11,
because the upstream QEMU fall-back to using a legacy ioreq server was
broken when direct resource mapping was introduced.
This is because, prior to the resource mapping patches, it was the creation
of the non-default ioreq server that failed if GFNs were not available,
whereas, as of 4.11, it is retrieval of the server info that fails, and
that failure does not trigger the fall-back.
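[Editor's sketch] The asymmetry described above can be modelled as a toy control flow. This is purely illustrative: the function and parameter names below are invented, and this is not QEMU's actual code; it only shows why a fall-back that guards server *creation* stops firing once the GFN shortage surfaces at *info retrieval* instead.

```c
#include <assert.h>
#include <stdbool.h>

enum outcome { USES_IOREQ_SERVER, FALLS_BACK_TO_DEFAULT, RESUME_FAILS };

/*
 * Toy model of the emulator's start-up path.  Pre-resource-mapping, a GFN
 * shortage surfaced when creating the non-default ioreq server, which is
 * the only failure the legacy fall-back guards.  As of 4.11, creation
 * succeeds and the shortage only surfaces later, when the server info
 * (the GFNs) is retrieved -- a path with no fall-back.
 */
static enum outcome emulator_setup(bool gfns_available, bool direct_mapping,
                                   bool pre_resource_mapping)
{
    /* Step 1: create the non-default ioreq server. */
    bool create_ok = pre_resource_mapping ? gfns_available : true;

    if ( !create_ok )
        return FALLS_BACK_TO_DEFAULT;     /* the guarded failure path */

    /* Step 2: map the shared pages, directly or via GFNs. */
    if ( direct_mapping )
        return USES_IOREQ_SERVER;

    if ( !gfns_available )
        return RESUME_FAILS;              /* 4.11: no fall-back from here */

    return USES_IOREQ_SERVER;
}
```

Under this model, a 4.11-style setup with no GFNs and no direct mapping fails outright, whereas the pre-resource-mapping flow would have fallen back to the default server.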
---
 xen/arch/x86/hvm/ioreq.c         | 50 ++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/domain.h |  3 ++-
 2 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3569beaad5..4bac0a100c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -237,6 +237,26 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+{
+    struct domain *d = s->target;
+    unsigned int i;
+
+    BUILD_BUG_ON(HVM_PARAM_IOREQ_PFN >
+                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
+    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
+                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
+    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
+
+    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
+    {
+        if ( !test_and_set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
+            return _gfn(d->arch.hvm.params[i]);
+    }
+
+    return INVALID_GFN;
+}
+
 static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->target;
@@ -248,7 +268,29 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
             return _gfn(d->arch.hvm.ioreq_gfn.base + i);
     }
 
-    return INVALID_GFN;
+    /*
+     * If we are out of 'normal' GFNs then we may still have a 'legacy'
+     * GFN available.
+     */
+    return hvm_alloc_legacy_ioreq_gfn(s);
+}
+
+static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+                                      gfn_t gfn)
+{
+    struct domain *d = s->target;
+    unsigned int i;
+
+    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
+    {
+        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+             break;
+    }
+    if ( i > HVM_PARAM_BUFIOREQ_PFN )
+        return false;
+
+    clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
+    return true;
 }
 
 static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
@@ -258,7 +300,11 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
 
     ASSERT(!gfn_eq(gfn, INVALID_GFN));
 
-    set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
+    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
+    {
+        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
+        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
+    }
 }
 
 static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 80b2ab041e..a9f68d9571 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -95,7 +95,8 @@ struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
         unsigned long base;
-        unsigned long mask;
+        unsigned long mask; /* clear to allocate */
+        unsigned long legacy_mask; /* set to allocate */
     } ioreq_gfn;
 
     /* Lock protects all other values in the sub-struct and the default */
-- 
2.11.0
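[Editor's sketch] For readers skimming the diff, the allocation scheme the patch introduces can be modelled outside Xen as follows. This is a simplified sketch with invented names (struct dom, NR_NORMAL, etc.), not the real Xen code; note the deliberate polarity difference between the two masks, matching the "clear to allocate" / "set to allocate" comments added to domain.h.

```c
#include <assert.h>

#define INVALID_GFN  (~0UL)
#define NR_NORMAL    2U  /* tiny 'normal' range so the fallback is easy to hit */
#define NR_LEGACY    2U  /* one slot each for IOREQ_PFN and BUFIOREQ_PFN */

struct dom {
    unsigned long base;                  /* first GFN of the 'normal' range */
    unsigned long mask;                  /* set bit => 'normal' slot free   */
    unsigned long legacy_mask;           /* set bit => 'legacy' slot in use */
    unsigned long legacy_gfn[NR_LEGACY]; /* stands in for the HVM params    */
};

static unsigned long alloc_ioreq_gfn(struct dom *d)
{
    for ( unsigned int i = 0; i < NR_NORMAL; i++ )
        if ( d->mask & (1UL << i) )          /* test_and_clear_bit() in Xen */
        {
            d->mask &= ~(1UL << i);
            return d->base + i;
        }

    /* Out of 'normal' GFNs: a 'legacy' GFN may still be available. */
    for ( unsigned int i = 0; i < NR_LEGACY; i++ )
        if ( !(d->legacy_mask & (1UL << i)) )  /* test_and_set_bit() in Xen */
        {
            d->legacy_mask |= 1UL << i;
            return d->legacy_gfn[i];
        }

    return INVALID_GFN;
}

static void free_ioreq_gfn(struct dom *d, unsigned long gfn)
{
    /* Legacy GFNs are recognised by value, not by range. */
    for ( unsigned int i = 0; i < NR_LEGACY; i++ )
        if ( gfn == d->legacy_gfn[i] )
        {
            d->legacy_mask &= ~(1UL << i);
            return;
        }

    d->mask |= 1UL << (gfn - d->base);       /* back into the 'normal' pool */
}
```

With base = 0x100 and legacy GFNs {8, 9}, four allocations yield 0x100, 0x101, 8, 9; a fifth returns INVALID_GFN, and freeing 8 makes it allocatable again.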



* Re: [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once
  2018-10-05 13:43 ` [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once Paul Durrant
@ 2018-10-08 13:16   ` Jan Beulich
  2018-10-08 13:20   ` Jan Beulich
  1 sibling, 0 replies; 10+ messages in thread
From: Jan Beulich @ 2018-10-08 13:16 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel

>>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
> These parameters should have always been in the 'set once' category
> but this has, so far, not been enforced.

But now that we're not even handling these anymore, why is there a
need to start doing so? If anything, wouldn't it be better to add them
to the deprecated group in that same function (and maybe also to its
"get" counterpart)?

Jan




* Re: [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once
  2018-10-05 13:43 ` [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once Paul Durrant
  2018-10-08 13:16   ` Jan Beulich
@ 2018-10-08 13:20   ` Jan Beulich
  2018-10-08 13:21     ` Paul Durrant
  1 sibling, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2018-10-08 13:20 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel

>>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
> These parameters should have always been in the 'set once' category
> but this has, so far, not been enforced.

Hmm, now that I'm looking at patch 2 I see where this is coming from,
but a hint towards this here would have helped, if this is to be a
separate patch (folding into the second patch with a respective
remark would perhaps have been better anyway).

Jan




* Re: [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can only be set once
  2018-10-08 13:20   ` Jan Beulich
@ 2018-10-08 13:21     ` Paul Durrant
  0 siblings, 0 replies; 10+ messages in thread
From: Paul Durrant @ 2018-10-08 13:21 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, Wei Liu, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 October 2018 14:20
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH 1/2] x86/hvm: make sure HVM_PARAM_[BUF]IOREQ_PFN can
> only be set once
> 
> >>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
> > These parameters should have always been in the 'set once' category
> > but this has, so far, not been enforced.
> 
> Hmm, now that I'm looking at patch 2 I see where this is coming from,
> but a hint towards this here would have helped, if this is to be a
> separate patch (folding into the second patch with a respective
> remark would perhaps have been better anyway).
> 

Ok. I'll squash them together if I need to do a v2.

  Paul

> Jan
> 



* Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN
  2018-10-05 13:43 ` [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN Paul Durrant
@ 2018-10-08 13:29   ` Jan Beulich
  2018-10-08 14:38     ` Paul Durrant
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2018-10-08 13:29 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel

>>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
> Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
> GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
> HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be used
> by (non-default) ioreq servers.
> 
> NOTE: This fixes a compatibility issue. A guest created on a version of
>       Xen that pre-dates the initial ioreq server implementation and then
>       migrated in will currently fail to resume because its migration
>       stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
>       HVM_PARAM_NR_IOREQ_SERVER_PAGES *unless* the system has an
>       emulator domain that uses direct resource mapping (which depends
>       on the version of privcmd it happens to have) in which case it
>       will not require use of GFNs for the ioreq server shared
>       pages.

Meaning this wants to be backported till where?

> A similar compatibility issue with migrated-in VMs exists with Xen 4.11
> because the upstream QEMU fall-back to use legacy ioreq server was broken
> when direct resource mapping was introduced.
> This is because, prior to the resource mapping patches, it was the creation
> of the non-default ioreq server that failed if GFNs were not available
> whereas, as of 4.11, it is retrieval of the info that fails which does not
> trigger the fall-back.

Are you working on a fix or workaround for this, too, then?

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -237,6 +237,26 @@ bool handle_hvm_io_completion(struct vcpu *v)
>      return true;
>  }
>  
> +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
> +{
> +    struct domain *d = s->target;
> +    unsigned int i;
> +
> +    BUILD_BUG_ON(HVM_PARAM_IOREQ_PFN >
> +                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
> +    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
> +                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
> +    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
> +
> +    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
> +    {
> +        if ( !test_and_set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
> +            return _gfn(d->arch.hvm.params[i]);

Aren't you risking using GFN 0 here, if the param was never set?
Since in theory GFN 0 could be valid here, perhaps whether these
GFNs may be used should be tracked by starting out legacy_mask
such that allocations are impossible, while the setting of the
params would then make the slots available for allocating from?

Jan
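[Editor's sketch] The reviewer's suggestion above could look roughly like this. It is an illustrative, Xen-independent sketch with invented names (struct legacy_slots, slots_publish, etc.): slots start out marked in use, so an unset param (which reads back as 0) can never be handed out as if it were GFN 0, and only the HVM-param set handler publishes a slot.

```c
#include <assert.h>

#define INVALID_GFN  (~0UL)
#define NR_LEGACY    2U

struct legacy_slots {
    unsigned long in_use;            /* set bit => not allocatable */
    unsigned long gfn[NR_LEGACY];
};

static void slots_init(struct legacy_slots *s)
{
    /* Start fully set: nothing is allocatable until a param is set. */
    s->in_use = (1UL << NR_LEGACY) - 1;
    for ( unsigned int i = 0; i < NR_LEGACY; i++ )
        s->gfn[i] = 0;               /* an unset param reads as 0 */
}

/* Called from the (hypothetical) HVM-param set handler. */
static void slots_publish(struct legacy_slots *s, unsigned int i,
                          unsigned long gfn)
{
    s->gfn[i] = gfn;
    s->in_use &= ~(1UL << i);        /* only now does the slot exist */
}

static unsigned long slots_alloc(struct legacy_slots *s)
{
    for ( unsigned int i = 0; i < NR_LEGACY; i++ )
        if ( !(s->in_use & (1UL << i)) )
        {
            s->in_use |= 1UL << i;
            return s->gfn[i];
        }

    return INVALID_GFN;              /* never GFN 0 by accident */
}
```

Allocating before any param is published yields INVALID_GFN rather than a bogus GFN 0.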




* Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN
  2018-10-08 13:29   ` Jan Beulich
@ 2018-10-08 14:38     ` Paul Durrant
  2018-10-08 14:59       ` Jan Beulich
  0 siblings, 1 reply; 10+ messages in thread
From: Paul Durrant @ 2018-10-08 14:38 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, Wei Liu, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 October 2018 14:29
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
> HVM_PARAM_[BUF]IOREQ_PFN
> 
> >>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
> > Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
> > GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
> > HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be used
> > by (non-default) ioreq servers.
> >
> > NOTE: This fixes a compatibility issue. A guest created on a version of
> >       Xen that pre-dates the initial ioreq server implementation and then
> >       migrated in will currently fail to resume because its migration
> >       stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
> >       HVM_PARAM_NR_IOREQ_SERVER_PAGES *unless* the system has an
> >       emulator domain that uses direct resource mapping (which depends
> >       on the version of privcmd it happens to have) in which case it
> >       will not require use of GFNs for the ioreq server shared
> >       pages.
> 
> Meaning this wants to be backported till where?
> 

This fix is 4.12 specific because it is predicated on removal of default ioreq server support.

> > A similar compatibility issue with migrated-in VMs exists with Xen 4.11
> > because the upstream QEMU fall-back to use legacy ioreq server was broken
> > when direct resource mapping was introduced.
> > This is because, prior to the resource mapping patches, it was the creation
> > of the non-default ioreq server that failed if GFNs were not available
> > whereas, as of 4.11, it is retrieval of the info that fails which does not
> > trigger the fall-back.
> 
> Are you working on a fix or workaround for this, too, then?
> 

Not yet. I'm not sure how to approach this. There are a few options:

1. Backport default IOREQ server removal and this fix
2. Do a bespoke 4.11 fix that forces IOREQ server creation to fail if there are no GFNs available, thus triggering the default IOREQ server fallback in QEMU.
3. Upstream a fix into QEMU to do a fallback at the point that it fails to get GFNs i.e. have it close its IOREQ server and then fall back to default.

The problem with 1 is that it breaks qemu trad. 2 is probably simplest, but if the emulator can do resource mapping it is unnecessary. 3 is probably best, but it's not our fix to deliver.

Thoughts?

 Paul


> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -237,6 +237,26 @@ bool handle_hvm_io_completion(struct vcpu *v)
> >      return true;
> >  }
> >
> > +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
> > +{
> > +    struct domain *d = s->target;
> > +    unsigned int i;
> > +
> > +    BUILD_BUG_ON(HVM_PARAM_IOREQ_PFN >
> > +                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
> > +    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
> > +                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
> > +    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
> > +
> > +    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
> > +    {
> > +        if ( !test_and_set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
> > +            return _gfn(d->arch.hvm.params[i]);
> 
> Aren't you risking to use GFN 0 here, if the param was never set?
> Since in theory GFN 0 could be valid here, perhaps whether these
> MFNs may be used should be tracked by starting out legacy_mask
> such that allocations are impossible, while the setting of the
> params would then make the slots available for allocating from?
> 
> Jan
> 



* Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN
  2018-10-08 14:38     ` Paul Durrant
@ 2018-10-08 14:59       ` Jan Beulich
  2018-10-08 15:05         ` Paul Durrant
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2018-10-08 14:59 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, xen-devel

>>> On 08.10.18 at 16:38, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 08 October 2018 14:29
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
>> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
>> Subject: Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
>> HVM_PARAM_[BUF]IOREQ_PFN
>> 
>> >>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
>> > Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
>> > GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
>> > HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be used
>> > by (non-default) ioreq servers.
>> >
>> > NOTE: This fixes a compatibility issue. A guest created on a version of
>> >       Xen that pre-dates the initial ioreq server implementation and then
>> >       migrated in will currently fail to resume because its migration
>> >       stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
>> >       HVM_PARAM_NR_IOREQ_SERVER_PAGES *unless* the system has an
>> >       emulator domain that uses direct resource mapping (which depends
>> >       on the version of privcmd it happens to have) in which case it
>> >       will not require use of GFNs for the ioreq server shared
>> >       pages.
>> 
>> Meaning this wants to be backported till where?
>> 
> 
> This fix is 4.12 specific because it is predicated on removal of default 
> ioreq server support.

Ah, good.

>> > A similar compatibility issue with migrated-in VMs exists with Xen 4.11
>> > because the upstream QEMU fall-back to use legacy ioreq server was broken
>> > when direct resource mapping was introduced.
>> > This is because, prior to the resource mapping patches, it was the creation
>> > of the non-default ioreq server that failed if GFNs were not available
>> > whereas, as of 4.11, it is retrieval of the info that fails which does not
>> > trigger the fall-back.
>> 
>> Are you working on a fix or workaround for this, too, then?
>> 
> 
> Not yet. I'm not sure how to approach this. There are a few options:
> 
> 1. Backport default IOREQ server removal and this fix
> 2. Do a bespoke 4.11 fix that forces IOREQ server creation to fail if there 
> are no GFNs available, thus triggering the default IOREQ server fallback in 
> QEMU.
> 3. Upstream a fix into QEMU to do a fallback at the point that it fails to 
> get GFNs i.e. have it close its IOREQ server and then fall back to default.
> 
> The problem with 1 is that it breaks qemu trad. 2 is probably simplest, but 
> if the emulator can do resource mapping it is unnecessary. 3 is probably 
> best, but it's not our fix to deliver.
> 
> Thoughts?

2 indeed looks best to me then. Though I'm not sure I understand
you correctly: would triggering the default IOREQ server
fall-back be a step backwards, if the emulator is capable and able to
use resource mapping? If so, somehow avoiding this would of
course be nice, and I'd then assume this isn't reasonable to achieve
without a QEMU-side change, in which case the solution wouldn't be
any better than 3 anymore.

Jan




* Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN
  2018-10-08 14:59       ` Jan Beulich
@ 2018-10-08 15:05         ` Paul Durrant
  0 siblings, 0 replies; 10+ messages in thread
From: Paul Durrant @ 2018-10-08 15:05 UTC (permalink / raw)
  To: 'Jan Beulich'; +Cc: Andrew Cooper, Wei Liu, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 08 October 2018 15:59
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> Subject: RE: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
> HVM_PARAM_[BUF]IOREQ_PFN
> 
> >>> On 08.10.18 at 16:38, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 08 October 2018 14:29
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> >> <wei.liu2@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>
> >> Subject: Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
> >> HVM_PARAM_[BUF]IOREQ_PFN
> >>
> >> >>> On 05.10.18 at 15:43, <paul.durrant@citrix.com> wrote:
> >> > Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
> >> > GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
> >> > HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be used
> >> > by (non-default) ioreq servers.
> >> >
> >> > NOTE: This fixes a compatibility issue. A guest created on a version of
> >> >       Xen that pre-dates the initial ioreq server implementation and then
> >> >       migrated in will currently fail to resume because its migration
> >> >       stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
> >> >       HVM_PARAM_NR_IOREQ_SERVER_PAGES *unless* the system has an
> >> >       emulator domain that uses direct resource mapping (which depends
> >> >       on the version of privcmd it happens to have) in which case it
> >> >       will not require use of GFNs for the ioreq server shared
> >> >       pages.
> >>
> >> Meaning this wants to be backported till where?
> >>
> >
> > This fix is 4.12 specific because it is predicated on removal of default
> > ioreq server support.
> 
> Ah, good.
> 
> >> > A similar compatibility issue with migrated-in VMs exists with Xen 4.11
> >> > because the upstream QEMU fall-back to use legacy ioreq server was broken
> >> > when direct resource mapping was introduced.
> >> > This is because, prior to the resource mapping patches, it was the creation
> >> > of the non-default ioreq server that failed if GFNs were not available
> >> > whereas, as of 4.11, it is retrieval of the info that fails which does not
> >> > trigger the fall-back.
> >>
> >> Are you working on a fix or workaround for this, too, then?
> >>
> >
> > Not yet. I'm not sure how to approach this. There are a few options:
> >
> > 1. Backport default IOREQ server removal and this fix
> > 2. Do a bespoke 4.11 fix that forces IOREQ server creation to fail if there
> > are no GFNs available, thus triggering the default IOREQ server fallback in
> > QEMU.
> > 3. Upstream a fix into QEMU to do a fallback at the point that it fails to
> > get GFNs i.e. have it close its IOREQ server and then fall back to default.
> >
> > The problem with 1 is that it breaks qemu trad. 2 is probably simplest, but
> > if the emulator can do resource mapping it is unnecessary. 3 is probably
> > best, but it's not our fix to deliver.
> >
> > Thoughts?
> 
> 2 indeed looks best to me then. Though I'm not sure I understand
> what you say correctly: Would triggering the default IOREQ server
> fallback be a step backwards, if the emulator is capable and able to
> use resource mapping?

Yes. With resource mapping the emulator doesn't rely on pages at special GFNs, and so it would indeed be a step backwards...

> If so, somehow avoiding this would of
> course be nice, and I'd then assume this isn't reasonable to achieve
> without a qemu side change, in which case the solution wouldn't be
> any better than 3 anymore.
> 

...but avoiding it would indeed mean a change in QEMU.

Given that the chances of having a VM migrated all the way forward from some ancient Xen, without a reboot at any point along the way, are probably quite small, and that no-one is likely to notice a fall-back to the default IOREQ server unless they are really looking for it, I'd say let's go with 2.

  Paul

> Jan
> 


