* [Qemu-devel] [PATCH 0/3] xen: XSA-197 follow-ups
From: Jan Beulich @ 2016-11-23 9:20 UTC
To: qemu-devel; +Cc: anthony.perard, Paul Durrant, Stefano Stabellini, xen-devel

1: fix quad word bufioreq handling
2: slightly simplify bufioreq handling
3: ignore direction in bufioreq handling

Signed-off-by: Jan Beulich <jbeulich@suse.com>
* [PATCH 1/3] xen: fix quad word bufioreq handling
From: Jan Beulich @ 2016-11-23 9:24 UTC
To: qemu-devel; +Cc: anthony.perard, xen-devel, Paul Durrant, Stefano Stabellini

We should not consume the second slot if it didn't get written yet.
Normal writers - i.e. Xen - would not update write_pointer between the
two writes, but the page may get fiddled with by the guest itself, and
we're better off entering an infinite loop in that case.

Reported-by: yanghongke <yanghongke@huawei.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: Alternatively we could call e.g. hw_error() instead.

--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -1021,6 +1021,9 @@ static int handle_buffered_iopage(XenIOS
         xen_rmb();
         qw = (req.size == 8);
         if (qw) {
+            if (rdptr + 1 == wrptr) {
+                break;
+            }
             buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
                 IOREQ_BUFFER_SLOT_NUM];
             req.data |= ((uint64_t)buf_req->data) << 32;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
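The situation the patch guards against can be sketched outside QEMU with a toy model of the buffered-ioreq ring. The helper and slot layout below are illustrative, not QEMU code: the low and high halves of a quad word request occupy two consecutive slots, the pointers are free-running 32-bit counters, and the consumer may only touch slot rdptr + 1 once the writer has published at least two slots (the patch's `rdptr + 1 == wrptr` is the one-slot case of this distance check, since the caller already knows rdptr != wrptr).

```c
#include <stdbool.h>
#include <stdint.h>

#define SLOT_COUNT 511u  /* stand-in for IOREQ_BUFFER_SLOT_NUM */

/* Toy consumer for a quad word request: slot rdptr holds the low 32 bits,
 * slot rdptr + 1 the high 32 bits.  With free-running pointers the number
 * of published slots is wrptr - rdptr (modulo 2^32); consuming the second
 * slot is only safe once that distance is at least 2. */
static bool read_qword(const uint32_t *slots, uint32_t rdptr, uint32_t wrptr,
                       uint64_t *out)
{
    if (wrptr - rdptr < 2) {
        return false;  /* second half not published yet */
    }
    *out = (uint64_t)slots[(rdptr + 1) % SLOT_COUNT] << 32
         | slots[rdptr % SLOT_COUNT];
    return true;
}

/* Tiny scenario: the two halves sit in slots 0 and 1; "published" is the
 * write pointer.  Returns 0 when the read must be refused. */
static uint64_t demo_read(uint32_t published)
{
    uint32_t slots[SLOT_COUNT] = {0};
    uint64_t v = 0;
    slots[0] = 0x11111111u;
    slots[1] = 0x22222222u;
    return read_qword(slots, 0, published, &v) ? v : 0;
}
```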
* Re: [Qemu-devel] [PATCH 1/3] xen: fix quad word bufioreq handling
From: Paul Durrant @ 2016-11-23 9:48 UTC
To: Jan Beulich, qemu-devel; +Cc: Anthony Perard, Stefano Stabellini, xen-devel

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 23 November 2016 09:24
>
> We should not consume the second slot if it didn't get written yet.
> Normal writers - i.e. Xen - would not update write_pointer between the
> two writes, but the page may get fiddled with by the guest itself, and
> we're better off entering an infinite loop in that case.

Xen would never put QEMU in this situation and the guest can't actually
modify the page whilst it's in use, since activation of the IOREQ server
removes the page from the guest's p2m so the premise of the patch is not
correct.

Paul
* Re: [PATCH 1/3] xen: fix quad word bufioreq handling
From: Jan Beulich @ 2016-11-23 10:36 UTC
To: Paul Durrant; +Cc: Anthony Perard, xen-devel, Stefano Stabellini, qemu-devel

>>> On 23.11.16 at 10:48, <Paul.Durrant@citrix.com> wrote:
> Xen would never put QEMU in this situation and the guest can't actually
> modify the page whilst it's in use, since activation of the IOREQ server
> removes the page from the guest's p2m so the premise of the patch is not
> correct.

Is that the case even for pre-ioreq-server Xen versions? The issue
here was reported together with what became XSA-197, and it's
not been assigned its own XSA just because there are other ways
for a guest to place high load on its qemu process (and there are
ways to deal with such high load situations).

Jan
* Re: [Qemu-devel] [PATCH 1/3] xen: fix quad word bufioreq handling
From: Paul Durrant @ 2016-11-23 10:45 UTC
To: Jan Beulich; +Cc: Anthony Perard, Stefano Stabellini, xen-devel, qemu-devel

> Is that the case even for pre-ioreq-server Xen versions? The issue
> here was reported together with what became XSA-197, and it's
> not been assigned its own XSA just because there are other ways
> for a guest to place high load on its qemu process (and there are
> ways to deal with such high load situations).

No, if QEMU is using a default ioreq server (i.e. the legacy way of doing
things) then it's vulnerable to the guest messing with the rings, and I'd
forgotten that migrated-in guests from old QEMUs also end up using the
default server, so I guess this is a worthy check to make... although maybe
it's best to just bail if the check fails, since it would indicate a
malicious guest.

Paul
* Re: [PATCH 1/3] xen: fix quad word bufioreq handling
From: Jan Beulich @ 2016-11-23 11:28 UTC
To: Paul Durrant; +Cc: Anthony Perard, xen-devel, Stefano Stabellini, qemu-devel

>>> On 23.11.16 at 11:45, <Paul.Durrant@citrix.com> wrote:
> No, if QEMU is using a default ioreq server (i.e. the legacy way of doing
> things) then it's vulnerable to the guest messing with the rings, and I'd
> forgotten that migrated-in guests from old QEMUs also end up using the
> default server, so I guess this is a worthy check to make... although maybe
> it's best to just bail if the check fails, since it would indicate a
> malicious guest.

Okay, that's basically the TBD note I have in the patch; I'll wait for
at least one of the qemu maintainers to voice their preference.

Jan
* Re: [Qemu-devel] [PATCH 1/3] xen: fix quad word bufioreq handling
From: Stefano Stabellini @ 2016-11-23 18:01 UTC
To: Jan Beulich; +Cc: Paul Durrant, Anthony Perard, xen-devel, qemu-devel

On Wed, 23 Nov 2016, Jan Beulich wrote:
> Okay, that's basically the TBD note I have in the patch; I'll wait for
> at least one of the qemu maintainers to voice their preference.

I think we should just print an error and destroy_hvm_domain(false) or
hw_error if the check fails.
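The failure policy under discussion can be modelled standalone, rather than against the real xen-hvm.c. The function below is hypothetical; in QEMU the error path would call destroy_hvm_domain(false) or hw_error() as suggested above, whereas here the fatal condition is merely logged and signalled to the caller:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical consumer-side policy: a half-published quad word request
 * can only come from a guest fiddling with the ring, so treat it as a
 * fatal guest error instead of silently skipping it.  Returns -1 on the
 * malicious case, 0 when both slots are available. */
static int qword_slots_published(uint32_t rdptr, uint32_t wrptr)
{
    if (rdptr + 1 == wrptr) {
        /* In QEMU this is where destroy_hvm_domain(false) or
         * hw_error() would be invoked. */
        fprintf(stderr, "bufioreq: second qword slot not published\n");
        return -1;
    }
    return 0;
}
```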
* [Qemu-devel] [PATCH 2/3] xen: slightly simplify bufioreq handling
From: Jan Beulich @ 2016-11-23 9:24 UTC
To: qemu-devel; +Cc: anthony.perard, Paul Durrant, Stefano Stabellini, xen-devel

There's no point setting fields always receiving the same value on each
iteration, as handle_ioreq() doesn't alter them anyway. Set state and
count once ahead of the loop, drop the redundant clearing of
data_is_ptr, and avoid the meaningless setting of df altogether.

Also avoid doing an unsigned long calculation of size when the field to
be initialized is only 32 bits wide (and the shift value in the range
0...3).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -995,6 +995,8 @@ static int handle_buffered_iopage(XenIOS
     }

     memset(&req, 0x00, sizeof(req));
+    req.state = STATE_IOREQ_READY;
+    req.count = 1;

     for (;;) {
         uint32_t rdptr = buf_page->read_pointer, wrptr;
@@ -1009,15 +1011,11 @@ static int handle_buffered_iopage(XenIOS
             break;
         }
         buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
-        req.size = 1UL << buf_req->size;
-        req.count = 1;
+        req.size = 1U << buf_req->size;
         req.addr = buf_req->addr;
         req.data = buf_req->data;
-        req.state = STATE_IOREQ_READY;
         req.dir = buf_req->dir;
-        req.df = 1;
         req.type = buf_req->type;
-        req.data_is_ptr = 0;
         xen_rmb();
         qw = (req.size == 8);
         if (qw) {
@@ -1032,6 +1030,13 @@ static int handle_buffered_iopage(XenIOS

         handle_ioreq(state, &req);

+        /* Only req.data may get updated by handle_ioreq(), albeit even that
+         * should not happen as such data would never make it to the guest.
+         */
+        assert(req.state == STATE_IOREQ_READY);
+        assert(req.count == 1);
+        assert(!req.data_is_ptr);
+
         atomic_add(&buf_page->read_pointer, qw + 1);
     }
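The size remark can be checked in isolation: the buffered ioreq's size field encodes the access width as a power of two with encodings 0..3 (1 to 8 bytes), so a plain unsigned shift never overflows 32 bits and the `1UL` widening bought nothing. A minimal sketch, with a function name of my own choosing:

```c
#include <stdint.h>

/* Decode a buffered-ioreq size field (0..3) into a byte count.  The
 * largest result is 8, so 32-bit arithmetic suffices; 1UL would only
 * widen the intermediate to 64 bits on LP64 targets for no benefit. */
static uint32_t decode_size(unsigned int size_field)
{
    return 1U << size_field;
}
```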
* Re: [Qemu-devel] [PATCH 2/3] xen: slightly simplify bufioreq handling
From: Paul Durrant @ 2016-11-23 9:51 UTC
To: Jan Beulich, qemu-devel; +Cc: Anthony Perard, Stefano Stabellini, xen-devel

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 23 November 2016 09:25
>
> There's no point setting fields always receiving the same value on each
> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
> count once ahead of the loop, drop the redundant clearing of
> data_is_ptr, and avoid the meaningless setting of df altogether.
>
> Also avoid doing an unsigned long calculation of size when the field to
> be initialized is only 32 bits wide (and the shift value in the range
> 0...3).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
* Re: [PATCH 2/3] xen: slightly simplify bufioreq handling
From: Stefano Stabellini @ 2016-11-23 18:13 UTC
To: Jan Beulich; +Cc: anthony.perard, xen-devel, Paul Durrant, qemu-devel

On Wed, 23 Nov 2016, Jan Beulich wrote:
> There's no point setting fields always receiving the same value on each
> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
> count once ahead of the loop, drop the redundant clearing of
> data_is_ptr, and avoid the meaningless setting of df altogether.

Why is setting df meaningless?
* Re: [Qemu-devel] [PATCH 2/3] xen: slightly simplify bufioreq handling
2016-11-23 9:24 ` Jan Beulich ` (2 preceding siblings ...)
(?) @ 2016-11-23 18:13 ` Stefano Stabellini
2016-11-24 10:31 ` Jan Beulich
2016-11-24 10:31 ` Jan Beulich
-1 siblings, 2 replies; 27+ messages in thread
From: Stefano Stabellini @ 2016-11-23 18:13 UTC (permalink / raw)
To: Jan Beulich
Cc: qemu-devel, anthony.perard, Paul Durrant, Stefano Stabellini, xen-devel

On Wed, 23 Nov 2016, Jan Beulich wrote:
> There's no point setting fields always receiving the same value on each
> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
> count once ahead of the loop, drop the redundant clearing of
> data_is_ptr, and avoid the meaningless setting of df altogether.

Why is setting df meaningless?

> Also avoid doing an unsigned long calculation of size when the field to
> be initialized is only 32 bits wide (and the shift value in the range
> 0...3).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -995,6 +995,8 @@ static int handle_buffered_iopage(XenIOS
>      }
>
>      memset(&req, 0x00, sizeof(req));
> +    req.state = STATE_IOREQ_READY;
> +    req.count = 1;
>
>      for (;;) {
>          uint32_t rdptr = buf_page->read_pointer, wrptr;
> @@ -1009,15 +1011,11 @@ static int handle_buffered_iopage(XenIOS
>              break;
>          }
>          buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
> -        req.size = 1UL << buf_req->size;
> -        req.count = 1;
> +        req.size = 1U << buf_req->size;
>          req.addr = buf_req->addr;
>          req.data = buf_req->data;
> -        req.state = STATE_IOREQ_READY;
>          req.dir = buf_req->dir;
> -        req.df = 1;
>          req.type = buf_req->type;
> -        req.data_is_ptr = 0;
>          xen_rmb();
>          qw = (req.size == 8);
>          if (qw) {
> @@ -1032,6 +1030,13 @@ static int handle_buffered_iopage(XenIOS
>
>          handle_ioreq(state, &req);
>
> +        /* Only req.data may get updated by handle_ioreq(), albeit even that
> +         * should not happen as such data would never make it to the guest.
> +         */
> +        assert(req.state == STATE_IOREQ_READY);
> +        assert(req.count == 1);
> +        assert(!req.data_is_ptr);
> +
>          atomic_add(&buf_page->read_pointer, qw + 1);
>      }
>
>

^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [Qemu-devel] [PATCH 2/3] xen: slightly simplify bufioreq handling
2016-11-23 18:13 ` [Qemu-devel] " Stefano Stabellini
@ 2016-11-24 10:31 ` Jan Beulich
2016-11-24 10:31 ` Jan Beulich
1 sibling, 0 replies; 27+ messages in thread
From: Jan Beulich @ 2016-11-24 10:31 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: anthony.perard, Paul Durrant, xen-devel, qemu-devel

>>> On 23.11.16 at 19:13, <sstabellini@kernel.org> wrote:
> On Wed, 23 Nov 2016, Jan Beulich wrote:
>> There's no point setting fields always receiving the same value on each
>> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
>> count once ahead of the loop, drop the redundant clearing of
>> data_is_ptr, and avoid the meaningless setting of df altogether.
>
> Why is setting df meaningless?

With count being fixed to one there's no need to update addresses, and
hence no use for knowing which direction the updates should go.

Jan

^ permalink raw reply [flat|nested] 27+ messages in thread
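Jan's answer can be restated as a toy model (a hypothetical helper, not a QEMU function): `df` only selects the direction in which the address is stepped between repetitions of a rep op, so with `count` pinned to 1 it can never affect the result:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a repeated ("rep") I/O op: df selects whether the
 * address steps down (df = 1) or up (df = 0) between repetitions. */
static uint64_t last_addr(uint64_t addr, uint32_t size, uint32_t count, int df)
{
    for (uint32_t i = 1; i < count; i++)
        addr = df ? addr - size : addr + size;
    return addr;
}
```

With `count == 1` the loop body never runs, so both values of `df` yield the same address — which is why the field need not be set at all in the buffered-ioreq path.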
* [Qemu-devel] [PATCH 3/3] xen: ignore direction in bufioreq handling
2016-11-23 9:20 [Qemu-devel] [PATCH 0/3] xen: XSA-197 follow-ups Jan Beulich
@ 2016-11-23 9:25 ` Jan Beulich
2016-11-23 9:24 ` [Qemu-devel] " Jan Beulich
` (2 subsequent siblings)
3 siblings, 0 replies; 27+ messages in thread
From: Jan Beulich @ 2016-11-23 9:25 UTC (permalink / raw)
To: qemu-devel; +Cc: anthony.perard, Paul Durrant, Stefano Stabellini, xen-devel

There's no way to communicate back read data, so only writes can ever
be usefully specified. Ignore the field, paving the road for eventually
re-using the bit for something else in a few (many?) years time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -997,6 +997,7 @@ static int handle_buffered_iopage(XenIOS
     memset(&req, 0x00, sizeof(req));
     req.state = STATE_IOREQ_READY;
     req.count = 1;
+    req.dir = IOREQ_WRITE;

     for (;;) {
         uint32_t rdptr = buf_page->read_pointer, wrptr;
@@ -1014,7 +1015,6 @@ static int handle_buffered_iopage(XenIOS
         req.size = 1U << buf_req->size;
         req.addr = buf_req->addr;
         req.data = buf_req->data;
-        req.dir = buf_req->dir;
         req.type = buf_req->type;
         xen_rmb();
         qw = (req.size == 8);
@@ -1031,10 +1031,12 @@ static int handle_buffered_iopage(XenIOS
         handle_ioreq(state, &req);

         /* Only req.data may get updated by handle_ioreq(), albeit even that
-         * should not happen as such data would never make it to the guest.
+         * should not happen as such data would never make it to the guest (we
+         * can only usefully see writes here after all).
          */
         assert(req.state == STATE_IOREQ_READY);
         assert(req.count == 1);
+        assert(req.dir == IOREQ_WRITE);
         assert(!req.data_is_ptr);

         atomic_add(&buf_page->read_pointer, qw + 1);

^ permalink raw reply [flat|nested] 27+ messages in thread
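The rationale — the buffered ring has no path to return read data to the requester — can be sketched as follows. The bitfield layout below is illustrative rather than the canonical `buf_ioreq_t` from the Xen public headers, but the point stands: a consumer can hardwire the direction to write and never inspect the bit, which is what makes it reclaimable later:

```c
#include <assert.h>
#include <stdint.h>

enum { IOREQ_WRITE = 0, IOREQ_READ = 1 };

/* Illustrative slot layout (not the canonical buf_ioreq_t): dir is a
 * single bit that the consumer deliberately ignores. */
struct buf_ioreq {
    uint32_t type : 8;
    uint32_t dir  : 1;   /* candidate for reuse once ignored */
    uint32_t size : 2;
    uint32_t addr : 20;
    uint32_t data;
};

/* A buffered request is always treated as a write: the ring has no
 * reply path that could carry read data back to the producer. */
static int effective_dir(const struct buf_ioreq *slot)
{
    (void)slot->dir;     /* present in the layout, never consulted */
    return IOREQ_WRITE;
}
```

Whatever a producer puts in the bit, the consumer's behaviour is unchanged — matching the patch, which hardwires `req.dir = IOREQ_WRITE` ahead of the loop and asserts it afterwards.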
* Re: [Qemu-devel] [PATCH 3/3] xen: ignore direction in bufioreq handling
2016-11-23 9:25 ` Jan Beulich
@ 2016-11-23 9:55 ` Paul Durrant
-1 siblings, 0 replies; 27+ messages in thread
From: Paul Durrant @ 2016-11-23 9:55 UTC (permalink / raw)
To: Jan Beulich, qemu-devel; +Cc: Anthony Perard, Stefano Stabellini, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 23 November 2016 09:25
> To: qemu-devel@nongnu.org
> Cc: Anthony Perard <anthony.perard@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> devel <xen-devel@lists.xenproject.org>
> Subject: [PATCH 3/3] xen: ignore direction in bufioreq handling
>
> There's no way to communicate back read data, so only writes can ever
> be usefully specified. Ignore the field, paving the road for eventually
> re-using the bit for something else in a few (many?) years time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

>
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -997,6 +997,7 @@ static int handle_buffered_iopage(XenIOS
>      memset(&req, 0x00, sizeof(req));
>      req.state = STATE_IOREQ_READY;
>      req.count = 1;
> +    req.dir = IOREQ_WRITE;
>
>      for (;;) {
>          uint32_t rdptr = buf_page->read_pointer, wrptr;
> @@ -1014,7 +1015,6 @@ static int handle_buffered_iopage(XenIOS
>          req.size = 1U << buf_req->size;
>          req.addr = buf_req->addr;
>          req.data = buf_req->data;
> -        req.dir = buf_req->dir;
>          req.type = buf_req->type;
>          xen_rmb();
>          qw = (req.size == 8);
> @@ -1031,10 +1031,12 @@ static int handle_buffered_iopage(XenIOS
>          handle_ioreq(state, &req);
>
>          /* Only req.data may get updated by handle_ioreq(), albeit even that
> -         * should not happen as such data would never make it to the guest.
> +         * should not happen as such data would never make it to the guest (we
> +         * can only usefully see writes here after all).
>           */
>          assert(req.state == STATE_IOREQ_READY);
>          assert(req.count == 1);
> +        assert(req.dir == IOREQ_WRITE);
>          assert(!req.data_is_ptr);
>
>          atomic_add(&buf_page->read_pointer, qw + 1);
>

^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 3/3] xen: ignore direction in bufioreq handling
2016-11-23 9:55 ` Paul Durrant
(?) @ 2016-11-23 18:16 ` Stefano Stabellini
-1 siblings, 0 replies; 27+ messages in thread
From: Stefano Stabellini @ 2016-11-23 18:16 UTC (permalink / raw)
To: Paul Durrant
Cc: Anthony Perard, xen-devel, Stefano Stabellini, qemu-devel, Jan Beulich

On Wed, 23 Nov 2016, Paul Durrant wrote:
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@suse.com]
> > Sent: 23 November 2016 09:25
> > To: qemu-devel@nongnu.org
> > Cc: Anthony Perard <anthony.perard@citrix.com>; Paul Durrant
> > <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> > devel <xen-devel@lists.xenproject.org>
> > Subject: [PATCH 3/3] xen: ignore direction in bufioreq handling
> >
> > There's no way to communicate back read data, so only writes can ever
> > be usefully specified. Ignore the field, paving the road for eventually
> > re-using the bit for something else in a few (many?) years time.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

> >
> > --- a/xen-hvm.c
> > +++ b/xen-hvm.c
> > @@ -997,6 +997,7 @@ static int handle_buffered_iopage(XenIOS
> >      memset(&req, 0x00, sizeof(req));
> >      req.state = STATE_IOREQ_READY;
> >      req.count = 1;
> > +    req.dir = IOREQ_WRITE;
> >
> >      for (;;) {
> >          uint32_t rdptr = buf_page->read_pointer, wrptr;
> > @@ -1014,7 +1015,6 @@ static int handle_buffered_iopage(XenIOS
> >          req.size = 1U << buf_req->size;
> >          req.addr = buf_req->addr;
> >          req.data = buf_req->data;
> > -        req.dir = buf_req->dir;
> >          req.type = buf_req->type;
> >          xen_rmb();
> >          qw = (req.size == 8);
> > @@ -1031,10 +1031,12 @@ static int handle_buffered_iopage(XenIOS
> >          handle_ioreq(state, &req);
> >
> >          /* Only req.data may get updated by handle_ioreq(), albeit even that
> > -         * should not happen as such data would never make it to the guest.
> > +         * should not happen as such data would never make it to the guest (we
> > +         * can only usefully see writes here after all).
> >           */
> >          assert(req.state == STATE_IOREQ_READY);
> >          assert(req.count == 1);
> > +        assert(req.dir == IOREQ_WRITE);
> >          assert(!req.data_is_ptr);
> >
> >          atomic_add(&buf_page->read_pointer, qw + 1);
> >

^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads:[~2016-11-24 10:31 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-23 9:20 [Qemu-devel] [PATCH 0/3] xen: XSA-197 follow-ups Jan Beulich
2016-11-23 9:24 ` [PATCH 1/3] xen: fix quad word bufioreq handling Jan Beulich
2016-11-23 9:24 ` [Qemu-devel] " Jan Beulich
2016-11-23 9:48 ` Paul Durrant
2016-11-23 10:36 ` Jan Beulich
2016-11-23 10:36 ` [Qemu-devel] " Jan Beulich
2016-11-23 10:45 ` Paul Durrant
2016-11-23 11:28 ` Jan Beulich
2016-11-23 11:28 ` [Qemu-devel] " Jan Beulich
2016-11-23 18:01 ` Stefano Stabellini
2016-11-23 18:01 ` Stefano Stabellini
2016-11-23 10:45 ` Paul Durrant
2016-11-23 9:48 ` Paul Durrant
2016-11-23 9:24 ` [Qemu-devel] [PATCH 2/3] xen: slightly simplify " Jan Beulich
2016-11-23 9:24 ` Jan Beulich
2016-11-23 9:51 ` [Qemu-devel] " Paul Durrant
2016-11-23 9:51 ` Paul Durrant
2016-11-23 18:13 ` Stefano Stabellini
2016-11-23 18:13 ` [Qemu-devel] " Stefano Stabellini
2016-11-24 10:31 ` Jan Beulich
2016-11-24 10:31 ` Jan Beulich
2016-11-23 9:25 ` [Qemu-devel] [PATCH 3/3] xen: ignore direction in " Jan Beulich
2016-11-23 9:25 ` Jan Beulich
2016-11-23 9:55 ` [Qemu-devel] " Paul Durrant
2016-11-23 9:55 ` Paul Durrant
2016-11-23 18:16 ` Stefano Stabellini
2016-11-23 18:16 ` [Qemu-devel] " Stefano Stabellini