* [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Don Slutz @ 2015-04-13 16:01 UTC (permalink / raw)
To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Don Slutz, Jan Beulich
This will prevent a hard-to-track-down bug. It is related to
commit ffdb781883abd3215287ba1b1853f3d437d1240c
("x86/hvm: prevent gcc uninitialised var warning"),
which presets "gmfn" to ~0UL.
With this change, if there is a path where bufioreq_pfn is passed
to hvm_free_ioreq_gmfn() while uninitialised, the BUG_ON() will
report it.
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
---
xen/arch/x86/hvm/hvm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ade99c0..0abac7c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -536,8 +536,9 @@ static int hvm_alloc_ioreq_gmfn(struct domain *d, unsigned long *gmfn)
static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
{
- unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
+ unsigned long i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
+ BUG_ON(i >= sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8);
clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
}
--
1.8.4
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Jan Beulich @ 2015-04-14 11:47 UTC (permalink / raw)
To: Andrew Cooper, Don Slutz; +Cc: Keir Fraser, xen-devel
>>> On 13.04.15 at 18:01, <dslutz@verizon.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -536,8 +536,9 @@ static int hvm_alloc_ioreq_gmfn(struct domain *d,
> unsigned long *gmfn)
>
> static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
> {
> - unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
> + unsigned long i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>
> + BUG_ON(i >= sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8);
> clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> }
I'd be happier with an ASSERT() - Andrew?
Jan
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Andrew Cooper @ 2015-04-14 13:17 UTC (permalink / raw)
To: Jan Beulich, Don Slutz; +Cc: Keir Fraser, xen-devel
On 14/04/15 12:47, Jan Beulich wrote:
>>>> On 13.04.15 at 18:01, <dslutz@verizon.com> wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -536,8 +536,9 @@ static int hvm_alloc_ioreq_gmfn(struct domain *d,
>> unsigned long *gmfn)
>>
>> static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>> {
>> - unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>> + unsigned long i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>>
>> + BUG_ON(i >= sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8);
>> clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>> }
> I'd be happier with an ASSERT() - Andrew?
If I recall, this is a follow-on from the compiler error, where gmfn now
gets initialised to ~0 to avoid a build failure.
If gcc is correct and there is a way for gmfn to be used uninitialised,
then the clear_bit() here clobbers memory. The BUG_ON() serves as
protection against the clobbering.
If, however, gcc is wrong, then the code here is fine, and a BUG_ON()
or ASSERT() will never trigger.
In addition, not a hotpath in the slightest, so performance isn't a concern.
I have still not managed to conclusively work out whether gcc is correct
or wrong. As a result, I would lean in the direction of BUG_ON() rather
than ASSERT(), out of paranoia. However, I would prefer even more a
solution where we were certain that gmfn isn't bogus.
~Andrew
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Don Slutz @ 2015-04-14 22:54 UTC (permalink / raw)
To: Andrew Cooper, Jan Beulich; +Cc: Keir Fraser, xen-devel
On 04/14/15 09:17, Andrew Cooper wrote:
> On 14/04/15 12:47, Jan Beulich wrote:
>>>>> On 13.04.15 at 18:01, <dslutz@verizon.com> wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -536,8 +536,9 @@ static int hvm_alloc_ioreq_gmfn(struct domain *d,
>>> unsigned long *gmfn)
>>>
>>> static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>>> {
>>> - unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>>> + unsigned long i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>>>
>>> + BUG_ON(i >= sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8);
>>> clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>>> }
>> I'd be happier with an ASSERT() - Andrew?
>
> If I recall, this is a follow-on from the compiler error, where gmfn now
> gets initialised to ~0 to avoid a build failure.
>
> If gcc is correct and there is a way for gmfn to be used uninitialised,
> then the clear_bit() here clobbers memory. The BUG_ON() serves as
> protection against the clobbering.
>
> If, however, gcc is wrong, then the code here is fine, and a BUG_ON()
> or ASSERT() will never trigger.
>
> In addition, not a hotpath in the slightest, so performance isn't a concern.
>
>
> I have still not managed to conclusively work out whether gcc is correct
> or wrong. As a result, I would lean in the direction of BUG_ON() rather
> than ASSERT(), out of paranoia. However, I would prefer even more a
> solution where we were certain that gmfn isn't bogus.
>
The best I can do is to write out the code paths that I can see. This
is all taken from RELEASE-4.5.0.
There are 4 calls to hvm_free_ioreq_gmfn():
1) x86/hvm/hvm.c:764 hvm_ioreq_server_map_pages(): hvm_free_ioreq_gmfn(d, bufioreq_pfn);
2) x86/hvm/hvm.c:768 hvm_ioreq_server_map_pages(): hvm_free_ioreq_gmfn(d, ioreq_pfn);
3) x86/hvm/hvm.c:788 hvm_ioreq_server_unmap_pages(): hvm_free_ioreq_gmfn(d, s->bufioreq.gmfn);
4) x86/hvm/hvm.c:790 hvm_ioreq_server_unmap_pages(): hvm_free_ioreq_gmfn(d, s->ioreq.gmfn);
GCC only reported about bufioreq_pfn:
hvm.c: In function ‘hvm_create_ioreq_server’:
hvm.c:487:18: error: ‘bufioreq_pfn’ may be used uninitialised in this function
which corresponds to the line:
unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
gcc has substituted the argument 'gmfn' with 'bufioreq_pfn' (after
inlining), so this should only apply to call #1.
Now for #1, the args to hvm_ioreq_server_map_pages() 'is_default' and
'handle_bufioreq' are important. The call that GCC is reporting on
(with context):
--------------------------------------------------------------------------
return 0;
fail4:
hvm_unmap_ioreq_page(s, 0);
fail3:
if ( !is_default && handle_bufioreq )
hvm_free_ioreq_gmfn(d, bufioreq_pfn);
--------------------------------------------------------------------------
So only the case 'is_default' == 0, 'handle_bufioreq' == 1 needs to be
checked, for all gotos to fail4 or fail3.
Now, the call to hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn) does have a
path where bufioreq_pfn is not set; however, it returns -ENOMEM in
that case, and so should always goto fail2:
---------------------------------------------------------------------
if ( handle_bufioreq )
{
rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
if ( rc )
goto fail2;
}
}
rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
if ( rc )
goto fail3;
---------------------------------------------------------------------
Since both sites test the same 'handle_bufioreq', the only time I can
see 'bufioreq_pfn' left uninitialised is when we goto fail2.
The ways to fail3 & fail4:
---------------------------------------------------------------------
rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
if ( rc )
goto fail3;
if ( handle_bufioreq )
{
rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
if ( rc )
goto fail4;
}
return 0;
--------------------------------------------------------------------
are all after the goto to fail2. So I see hvm_alloc_ioreq_gmfn()
setting 'bufioreq_pfn' in all cases where it is possible to reach
fail3 or fail4.
Hope this helps.
-Don Slutz
> ~Andrew
>
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Tim Deegan @ 2015-04-16 10:53 UTC (permalink / raw)
To: Andrew Cooper; +Cc: Keir Fraser, Don Slutz, Jan Beulich, xen-devel
At 14:17 +0100 on 14 Apr (1429021061), Andrew Cooper wrote:
> On 14/04/15 12:47, Jan Beulich wrote:
> >>>> On 13.04.15 at 18:01, <dslutz@verizon.com> wrote:
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -536,8 +536,9 @@ static int hvm_alloc_ioreq_gmfn(struct domain *d,
> >> unsigned long *gmfn)
> >>
> >> static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
> >> {
> >> - unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
> >> + unsigned long i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
> >>
> >> + BUG_ON(i >= sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8);
> >> clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> >> }
> > I'd be happier with an ASSERT() - Andrew?
>
> If I recall, this is a follow-on from the compiler error, where gmfn now
> gets initialised to ~0 to avoid a build failure.
>
> If gcc is correct and there is a way for gmfn to be used uninitialised,
> then the clear_bit() here clobbers memory. The BUG_ON() serves as
> protection against the clobbering.
>
> If, however, gcc is wrong, then the code here is fine, and a BUG_ON()
> or ASSERT() will never trigger.
>
> In addition, not a hotpath in the slightest, so performance isn't a concern.
>
>
> I have still not managed to conclusively work out whether gcc is correct
> or wrong. As a result, I would lean in the direction of BUG_ON() rather
> than ASSERT(), out of paranoia. However, I would prefer even more a
> solution where we were certain that gmfn isn't bogus.
AFAICT GCC is wrong, though the code is confusing enough to me that I
don't blame GCC for being confused too. :)
I would be inclined to use a bigger hammer here. IMO refactoring like
this makes the code easier to reason about (compile-tested only):
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 21cf744..bdc2457 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -495,7 +495,8 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
{
unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
- clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
+ if (gmfn != INVALID_GFN)
+ clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
}
static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
@@ -728,64 +729,61 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
spin_unlock(&s->lock);
}
+
static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
- bool_t is_default, bool_t handle_bufioreq)
+ unsigned long ioreq_pfn,
+ unsigned long bufioreq_pfn)
+{
+ int rc;
+
+ rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
+ if ( rc )
+ return rc;
+
+ if ( bufioreq_pfn != INVALID_GFN )
+ rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
+
+ if ( rc )
+ hvm_unmap_ioreq_page(s, 0);
+
+ return rc;
+}
+
+static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
+ bool_t is_default,
+ bool_t handle_bufioreq)
{
struct domain *d = s->domain;
- unsigned long ioreq_pfn;
- unsigned long bufioreq_pfn = ~0UL; /* gcc uninitialised var warning */
+ unsigned long ioreq_pfn = INVALID_GFN;
+ unsigned long bufioreq_pfn = INVALID_GFN;
int rc;
if ( is_default )
{
- ioreq_pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
-
/*
* The default ioreq server must handle buffered ioreqs, for
* backwards compatibility.
*/
ASSERT(handle_bufioreq);
- bufioreq_pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+ return hvm_ioreq_server_map_pages(s,
+ d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
+ d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
}
- else
- {
- rc = hvm_alloc_ioreq_gmfn(d, &ioreq_pfn);
- if ( rc )
- goto fail1;
- if ( handle_bufioreq )
- {
- rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
- if ( rc )
- goto fail2;
- }
- }
+ rc = hvm_alloc_ioreq_gmfn(d, &ioreq_pfn);
- rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
- if ( rc )
- goto fail3;
+ if ( !rc && handle_bufioreq )
+ rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
- if ( handle_bufioreq )
- {
- rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
- if ( rc )
- goto fail4;
- }
+ if ( !rc )
+ rc = hvm_ioreq_server_map_pages(s, ioreq_pfn, bufioreq_pfn);
- return 0;
-
-fail4:
- hvm_unmap_ioreq_page(s, 0);
-
-fail3:
- if ( !is_default && handle_bufioreq )
- hvm_free_ioreq_gmfn(d, bufioreq_pfn);
-
-fail2:
- if ( !is_default )
+ if ( rc )
+ {
hvm_free_ioreq_gmfn(d, ioreq_pfn);
+ hvm_free_ioreq_gmfn(d, bufioreq_pfn);
+ }
-fail1:
return rc;
}
@@ -938,7 +936,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
if ( rc )
return rc;
- rc = hvm_ioreq_server_map_pages(s, is_default, handle_bufioreq);
+ rc = hvm_ioreq_server_setup_pages(s, is_default, handle_bufioreq);
if ( rc )
goto fail_map;
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Jan Beulich @ 2015-04-16 11:32 UTC (permalink / raw)
To: Tim Deegan; +Cc: Andrew Cooper, Keir Fraser, Don Slutz, xen-devel
>>> On 16.04.15 at 12:53, <tim@xen.org> wrote:
> At 14:17 +0100 on 14 Apr (1429021061), Andrew Cooper wrote:
>> On 14/04/15 12:47, Jan Beulich wrote:
>> >>>> On 13.04.15 at 18:01, <dslutz@verizon.com> wrote:
>> >> --- a/xen/arch/x86/hvm/hvm.c
>> >> +++ b/xen/arch/x86/hvm/hvm.c
>> >> @@ -536,8 +536,9 @@ static int hvm_alloc_ioreq_gmfn(struct domain *d,
>> >> unsigned long *gmfn)
>> >>
>> >> static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
>> >> {
>> >> - unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>> >> + unsigned long i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
>> >>
>> >> + BUG_ON(i >= sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8);
>> >> clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
>> >> }
>> > I'd be happier with an ASSERT() - Andrew?
>>
>> If I recall, this is a follow-on from the compiler error, where gmfn now
>> gets initialised to ~0 to avoid a build failure.
>>
>> If gcc is correct and there is a way for gmfn to be used uninitialised,
>> then the clear_bit() here clobbers memory. The BUG_ON() serves as
>> protection against the clobbering.
>>
>> If, however, gcc is wrong, then the code here is fine, and a BUG_ON()
>> or ASSERT() will never trigger.
>>
>> In addition, not a hotpath in the slightest, so performance isn't a concern.
>>
>>
>> I have still not managed to conclusively work out whether gcc is correct
>> or wrong. As a result, I would lean in the direction of BUG_ON() rather
>> than ASSERT(), out of paranoia. However, I would prefer even more a
>> solution where we were certain that gmfn isn't bogus.
>
> AFAICT GCC is wrong, though the code is confusing enough to me that I
> don't blame GCC for being confused too. :)
>
> I would be inclined to use a bigger hammer here. IMO refactoring like
> this makes it easier to reason about (compile tested only):
This looks like a pretty nice cleanup; I particularly like the 4 labels
going away.
Jan
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Tim Deegan @ 2015-04-16 16:37 UTC (permalink / raw)
To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Jan Beulich, Don Slutz
At 12:32 +0100 on 16 Apr (1429187564), Jan Beulich wrote:
> >>> On 16.04.15 at 12:53, <tim@xen.org> wrote:
> > I would be inclined to use a bigger hammer here. IMO refactoring like
> > this makes it easier to reason about (compile tested only):
>
> This looks like a pretty nice cleanup; I particularly like the 4 labels
> going away.
OK, here it is as a proper patch. I still haven't tested it -- indeed
I'm not sure how to test multiple ioreq clients. Any suggestions?
From 67957b954f2b8d58b635a8e5fdc818154ec9e4ff Mon Sep 17 00:00:00 2001
From: Tim Deegan <tim@xen.org>
Date: Thu, 16 Apr 2015 17:34:24 +0100
Subject: [PATCH] x86/hvm: refactor code that allocates ioreq gfns.
It was confusing GCC's uninitialized-variable detection.
Signed-off-by: Tim Deegan <tim@xen.org>
---
xen/arch/x86/hvm/hvm.c | 79 ++++++++++++++++++++++++--------------------------
1 file changed, 38 insertions(+), 41 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 21cf744..db7bac3 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -495,7 +495,8 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
{
unsigned int i = gmfn - d->arch.hvm_domain.ioreq_gmfn.base;
- clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
+ if (gmfn != INVALID_GFN)
+ clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
}
static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
@@ -729,63 +730,59 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
}
static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
- bool_t is_default, bool_t handle_bufioreq)
+ unsigned long ioreq_pfn,
+ unsigned long bufioreq_pfn)
+{
+ int rc;
+
+ rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
+ if ( rc )
+ return rc;
+
+ if ( bufioreq_pfn != INVALID_GFN )
+ rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
+
+ if ( rc )
+ hvm_unmap_ioreq_page(s, 0);
+
+ return rc;
+}
+
+static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
+ bool_t is_default,
+ bool_t handle_bufioreq)
{
struct domain *d = s->domain;
- unsigned long ioreq_pfn;
- unsigned long bufioreq_pfn = ~0UL; /* gcc uninitialised var warning */
+ unsigned long ioreq_pfn = INVALID_GFN;
+ unsigned long bufioreq_pfn = INVALID_GFN;
int rc;
if ( is_default )
{
- ioreq_pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
-
/*
* The default ioreq server must handle buffered ioreqs, for
* backwards compatibility.
*/
ASSERT(handle_bufioreq);
- bufioreq_pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
- }
- else
- {
- rc = hvm_alloc_ioreq_gmfn(d, &ioreq_pfn);
- if ( rc )
- goto fail1;
-
- if ( handle_bufioreq )
- {
- rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
- if ( rc )
- goto fail2;
- }
+ return hvm_ioreq_server_map_pages(s,
+ d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
+ d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
}
- rc = hvm_map_ioreq_page(s, 0, ioreq_pfn);
- if ( rc )
- goto fail3;
+ rc = hvm_alloc_ioreq_gmfn(d, &ioreq_pfn);
- if ( handle_bufioreq )
- {
- rc = hvm_map_ioreq_page(s, 1, bufioreq_pfn);
- if ( rc )
- goto fail4;
- }
+ if ( !rc && handle_bufioreq )
+ rc = hvm_alloc_ioreq_gmfn(d, &bufioreq_pfn);
- return 0;
-
-fail4:
- hvm_unmap_ioreq_page(s, 0);
-
-fail3:
- if ( !is_default && handle_bufioreq )
- hvm_free_ioreq_gmfn(d, bufioreq_pfn);
+ if ( !rc )
+ rc = hvm_ioreq_server_map_pages(s, ioreq_pfn, bufioreq_pfn);
-fail2:
- if ( !is_default )
+ if ( rc )
+ {
hvm_free_ioreq_gmfn(d, ioreq_pfn);
+ hvm_free_ioreq_gmfn(d, bufioreq_pfn);
+ }
-fail1:
return rc;
}
@@ -938,7 +935,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
if ( rc )
return rc;
- rc = hvm_ioreq_server_map_pages(s, is_default, handle_bufioreq);
+ rc = hvm_ioreq_server_setup_pages(s, is_default, handle_bufioreq);
if ( rc )
goto fail_map;
--
2.1.4
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Andrew Cooper @ 2015-04-16 18:44 UTC (permalink / raw)
To: Tim Deegan, xen-devel; +Cc: Keir Fraser, Jan Beulich, Don Slutz
On 16/04/15 17:37, Tim Deegan wrote:
> At 12:32 +0100 on 16 Apr (1429187564), Jan Beulich wrote:
>>>>> On 16.04.15 at 12:53, <tim@xen.org> wrote:
>>> I would be inclined to use a bigger hammer here. IMO refactoring like
>>> this makes it easier to reason about (compile tested only):
>> This looks like a pretty nice cleanup; I particularly like the 4 labels
>> going away.
> OK, here it is as a proper patch. I still haven't tested it -- indeed
> I'm not sure how to test multiple ioreq clients. Any suggestions?
>
> From 67957b954f2b8d58b635a8e5fdc818154ec9e4ff Mon Sep 17 00:00:00 2001
> From: Tim Deegan <tim@xen.org>
> Date: Thu, 16 Apr 2015 17:34:24 +0100
> Subject: [PATCH] x86/hvm: refactor code that allocates ioreq gfns.
>
> It was confusing GCC's uninitialized-variable detection.
>
> Signed-off-by: Tim Deegan <tim@xen.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Jan Beulich @ 2015-04-17 7:45 UTC (permalink / raw)
To: Paul Durrant, Tim Deegan; +Cc: Andrew Cooper, Keir Fraser, Don Slutz, xen-devel
>>> On 16.04.15 at 18:37, <tim@xen.org> wrote:
> At 12:32 +0100 on 16 Apr (1429187564), Jan Beulich wrote:
>> >>> On 16.04.15 at 12:53, <tim@xen.org> wrote:
>> > I would be inclined to use a bigger hammer here. IMO refactoring like
>> > this makes it easier to reason about (compile tested only):
>>
>> This looks like a pretty nice cleanup; I particularly like the 4 labels
>> going away.
>
> OK, here it is as a proper patch. I still haven't tested it -- indeed
> I'm not sure how to test multiple ioreq clients. Any suggestions?
Paul?
Thanks, Jan
* Re: [PATCH 1/1] x86/hvm: prevent hvm_free_ioreq_gmfn() clobber of arbitrary memory
From: Paul Durrant @ 2015-04-23 13:25 UTC (permalink / raw)
To: Jan Beulich (JBeulich@suse.com)
Cc: Andrew Cooper, Keir (Xen.org), Don Slutz (dslutz@verizon.com), xen-devel
Apologies for breaking the threading. The mail server ate the thread and I couldn't manage to get it back...
>[snip]
>> OK, here it is as a proper patch. I still haven't tested it -- indeed
>> I'm not sure how to test multiple ioreq clients. Any suggestions?
>
>Paul?
>
>Thanks, Jan
Testing multiple is a bit tricky... Best thing to do, I think, is:
- Configure an HVM guest to use latest upstream QEMU (post my patch 3996e85c1822e05c50250f8d2d1e57b6bea1229d), which will give you one non-default ioreq server.
- Grab my demu repo from xenbits (git://xenbits.xen.org/people/pauldu/demu.git) and build it (master branch should be ok).
- Bring up the guest paused
- Start demu from a dom0 root shell, e.g.:
demu --domain=<domid> --device=31 --function=0
- Unpause the domain
After it comes up you should be able to see device 31 on the PCI bus, and it should appear to be a SCSI controller.
Paul