* [PATCH 0/6] Fix XSA-155-like bugs in frontend drivers
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, Roger Pau Monné,
	Boris Ostrovsky, Greg Kroah-Hartman, Jens Axboe, Juergen Gross,
	Konrad Rzeszutek Wilk, Stefano Stabellini, open list:BLOCK LAYER,
	open list, open list:NETWORKING DRIVERS, stable

Patches in the original Xen Security Advisory 155 covered only backend
drivers, leaving the frontend fixes to be "developed and released
(publicly) after the embargo date". This is that series.

Marek Marczykowski-Górecki (6):
  xen: Add RING_COPY_RESPONSE()
  xen-netfront: copy response out of shared buffer before accessing it
  xen-netfront: do not use data already exposed to backend
  xen-netfront: add range check for Tx response id
  xen-blkfront: make local copy of response before using it
  xen-blkfront: prepare request locally, only then put it on the shared ring

 drivers/block/xen-blkfront.c    | 110 ++++++++++++++++++---------------
 drivers/net/xen-netfront.c      |  61 +++++++++---------
 include/xen/interface/io/ring.h |  14 ++++-
 3 files changed, 106 insertions(+), 79 deletions(-)

base-commit: 6d08b06e67cd117f6992c46611dfb4ce267cd71e
-- 
git-series 0.9.1

* [PATCH 1/6] xen: Add RING_COPY_RESPONSE()
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, stable, Boris Ostrovsky,
	Juergen Gross, Stefano Stabellini, Greg Kroah-Hartman, open list

RING_GET_RESPONSE() on a shared ring is easy to use incorrectly
(i.e., without considering that the other end may alter the data in the
shared ring while it is being inspected).  Safe usage of a response
generally requires taking a local copy.

Provide a RING_COPY_RESPONSE() macro to use instead of
RING_GET_RESPONSE() and an open-coded memcpy().  This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.

Use a volatile source to prevent the compiler from reordering or
omitting the copy.

This is complementary to XSA155.
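
For illustration only, a minimal consumer-loop sketch (handle_response()
is a hypothetical helper, not part of this patch):

    struct xen_netif_tx_response rsp;
    RING_IDX cons, prod;

    prod = queue->tx.sring->rsp_prod;
    rmb();    /* ensure we see responses up to prod */
    for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
            /* Snapshot the slot into private memory... */
            RING_COPY_RESPONSE(&queue->tx, cons, &rsp);
            /* ...so every later check acts on data the backend can
             * no longer change. */
            handle_response(&rsp);
    }
    queue->tx.rsp_cons = prod;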

CC: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 include/xen/interface/io/ring.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 3f40501..03702f6 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -201,6 +201,20 @@ struct __name##_back_ring {						\
 #define RING_GET_RESPONSE(_r, _idx)					\
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
 
+/*
+ * Get a local copy of a response.
+ *
+ * Use this in preference to RING_GET_RESPONSE() so all processing is
+ * done on a local copy that cannot be modified by the other end.
+ *
+ * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
+ * to be ineffective where _rsp is a struct which consists of only bitfields.
+ */
+#define RING_COPY_RESPONSE(_r, _idx, _rsp) do {				\
+	/* Use volatile to force the copy into _rsp. */			\
+	*(_rsp) = *(volatile typeof(_rsp))RING_GET_RESPONSE(_r, _idx);	\
+} while (0)
+
 /* Loop termination condition: Would the specified index overflow the ring? */
 #define RING_REQUEST_CONS_OVERFLOW(_r, _cons)				\
     (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))
-- 
git-series 0.9.1

* [PATCH 2/6] xen-netfront: copy response out of shared buffer before accessing it
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, stable, Boris Ostrovsky,
	Juergen Gross, open list:NETWORKING DRIVERS, open list

Make a local copy of the response; otherwise the backend might modify
it while the frontend is already processing it, leading to a
time-of-check / time-of-use issue.

This is complementary to XSA155.
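
To illustrate the hazard (a sketch based on the code removed below): a
pointer obtained via RING_GET_RESPONSE() points into shared memory, so
every dereference is a separate fetch that the backend can race with:

    struct xen_netif_tx_response *txrsp =
            RING_GET_RESPONSE(&queue->tx, cons);
    if (txrsp->status == XEN_NETIF_RSP_NULL)  /* fetch #1 */
            continue;
    id = txrsp->id;   /* fetch #2: may not match what was just checked */

Copying the whole response first makes both uses see one snapshot.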

Cc: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/net/xen-netfront.c | 51 +++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 4dd0668..dc99763 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -387,13 +387,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
 		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
-			struct xen_netif_tx_response *txrsp;
+			struct xen_netif_tx_response txrsp;
 
-			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
-			if (txrsp->status == XEN_NETIF_RSP_NULL)
+			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
+			if (txrsp.status == XEN_NETIF_RSP_NULL)
 				continue;
 
-			id  = txrsp->id;
+			id  = txrsp.id;
 			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
 				queue->grant_tx_ref[id]) != 0)) {
@@ -741,7 +741,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
 			     RING_IDX rp)
 
 {
-	struct xen_netif_extra_info *extra;
+	struct xen_netif_extra_info extra;
 	struct device *dev = &queue->info->netdev->dev;
 	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
@@ -757,24 +757,23 @@ static int xennet_get_extras(struct netfront_queue *queue,
 			break;
 		}
 
-		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&queue->rx, ++cons);
+		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
 
-		if (unlikely(!extra->type ||
-			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
+		if (unlikely(!extra.type ||
+			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
 			if (net_ratelimit())
 				dev_warn(dev, "Invalid extra type: %d\n",
-					extra->type);
+					extra.type);
 			err = -EINVAL;
 		} else {
-			memcpy(&extras[extra->type - 1], extra,
-			       sizeof(*extra));
+			memcpy(&extras[extra.type - 1], &extra,
+			       sizeof(extra));
 		}
 
 		skb = xennet_get_rx_skb(queue, cons);
 		ref = xennet_get_rx_ref(queue, cons);
 		xennet_move_rx_slot(queue, skb, ref);
-	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
+	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	queue->rx.rsp_cons = cons;
 	return err;
@@ -784,28 +783,28 @@ static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
-	struct xen_netif_rx_response *rx = &rinfo->rx;
+	struct xen_netif_rx_response rx = rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
 	struct device *dev = &queue->info->netdev->dev;
 	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
 	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
-	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
+	int max = MAX_SKB_FRAGS + (rx.status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
-	if (rx->flags & XEN_NETRXF_extra_info) {
+	if (rx.flags & XEN_NETRXF_extra_info) {
 		err = xennet_get_extras(queue, extras, rp);
 		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
-		if (unlikely(rx->status < 0 ||
-			     rx->offset + rx->status > XEN_PAGE_SIZE)) {
+		if (unlikely(rx.status < 0 ||
+			     rx.offset + rx.status > XEN_PAGE_SIZE)) {
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %u, size: %d\n",
-					 rx->offset, rx->status);
+					 rx.offset, rx.status);
 			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
@@ -819,7 +818,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 		if (ref == GRANT_INVALID_REF) {
 			if (net_ratelimit())
 				dev_warn(dev, "Bad rx response id %d.\n",
-					 rx->id);
+					 rx.id);
 			err = -EINVAL;
 			goto next;
 		}
@@ -832,7 +831,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 		__skb_queue_tail(list, skb);
 
 next:
-		if (!(rx->flags & XEN_NETRXF_more_data))
+		if (!(rx.flags & XEN_NETRXF_more_data))
 			break;
 
 		if (cons + slots == rp) {
@@ -842,7 +841,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx);
 		skb = xennet_get_rx_skb(queue, cons + slots);
 		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
@@ -898,9 +897,9 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
-		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&queue->rx, ++cons);
+		struct xen_netif_rx_response rx;
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
+		RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
 			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
@@ -911,7 +910,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 		BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
 
 		skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
-				rx->offset, rx->status, PAGE_SIZE);
+				rx.offset, rx.status, PAGE_SIZE);
 
 		skb_shinfo(nskb)->nr_frags = 0;
 		kfree_skb(nskb);
@@ -1007,7 +1006,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
+		RING_COPY_RESPONSE(&queue->rx, i, rx);
 		memset(extras, 0, sizeof(rinfo.extras));
 
 		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
-- 
git-series 0.9.1

* [PATCH 3/6] xen-netfront: do not use data already exposed to backend
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, stable, Boris Ostrovsky,
	Juergen Gross, open list:NETWORKING DRIVERS, open list

The backend may freely modify anything on the shared page, so use the
data that was supposed to be written there, instead of reading it back
from the shared page.

This is complementary to XSA155.
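
In sketch form (condensed from the hunks below), the length is computed
locally and used for all further bookkeeping, instead of being read back
from tx->size on the shared ring:

    unsigned int this_len = min_t(unsigned int,
                                  XEN_PAGE_SIZE - offset, len);
    /* tx->size on the shared ring is derived from this_len; local
     * accounting never reads it back. */
    offset += this_len;
    len -= this_len;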

CC: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/net/xen-netfront.c |  9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index dc99763..934b8a4 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -458,7 +458,7 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
 	tx->flags = 0;
 
 	info->tx = tx;
-	info->size += tx->size;
+	info->size += len;
 }
 
 static struct xen_netif_tx_request *xennet_make_first_txreq(
@@ -574,7 +574,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	int slots;
 	struct page *page;
 	unsigned int offset;
-	unsigned int len;
+	unsigned int len, this_len;
 	unsigned long flags;
 	struct netfront_queue *queue = NULL;
 	unsigned int num_queues = dev->real_num_tx_queues;
@@ -634,14 +634,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	/* First request for the linear area. */
+	this_len = min_t(unsigned int, XEN_PAGE_SIZE - offset, len);
 	first_tx = tx = xennet_make_first_txreq(queue, skb,
 						page, offset, len);
-	offset += tx->size;
+	offset += this_len;
 	if (offset == PAGE_SIZE) {
 		page++;
 		offset = 0;
 	}
-	len -= tx->size;
+	len -= this_len;
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
 		/* local packet? */
-- 
git-series 0.9.1

* [PATCH 4/6] xen-netfront: add range check for Tx response id
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, stable, Boris Ostrovsky,
	Juergen Gross, open list:NETWORKING DRIVERS, open list

The Tx response ID is fetched from the shared page, so make sure it is
sane before using it as an array index.

CC: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/net/xen-netfront.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 934b8a4..55c9b25 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -394,6 +394,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 				continue;
 
 			id  = txrsp.id;
+			BUG_ON(id >= NET_TX_RING_SIZE);
 			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
 				queue->grant_tx_ref[id]) != 0)) {
-- 
git-series 0.9.1

* [PATCH 5/6] xen-blkfront: make local copy of response before using it
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, stable, Konrad Rzeszutek Wilk,
	Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross, Jens Axboe,
	open list:BLOCK LAYER, open list

Data on the shared page can be changed at any time by the backend. Make
a local copy, which is no longer controlled by the backend, and only
then access it.

This is complementary to XSA155.
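
The key property (a sketch mirroring the hunks below) is that the id
validated against BLK_RING_SIZE() is necessarily the same id later used
to index rinfo->shadow[], because both come from the local copy:

    RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
    id = bret.id;                     /* read once, from private memory */
    if (id >= BLK_RING_SIZE(info))
            continue;                 /* reject a backend-controlled id */
    req = rinfo->shadow[id].request;  /* safe: id can no longer change */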

CC: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/block/xen-blkfront.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 2a8e781..3926811 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1549,7 +1549,7 @@ static bool blkif_completion(unsigned long *id,
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
 	struct request *req;
-	struct blkif_response *bret;
+	struct blkif_response bret;
 	RING_IDX i, rp;
 	unsigned long flags;
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
@@ -1566,8 +1566,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	for (i = rinfo->ring.rsp_cons; i != rp; i++) {
 		unsigned long id;
 
-		bret = RING_GET_RESPONSE(&rinfo->ring, i);
-		id   = bret->id;
+		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
+		id   = bret.id;
 		/*
 		 * The backend has messed up and given us an id that we would
 		 * never have given to it (we stamp it up to BLK_RING_SIZE -
@@ -1575,39 +1575,39 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		 */
 		if (id >= BLK_RING_SIZE(info)) {
 			WARN(1, "%s: response to %s has incorrect id (%ld)\n",
-			     info->gd->disk_name, op_name(bret->operation), id);
+			     info->gd->disk_name, op_name(bret.operation), id);
 			/* We can't safely get the 'struct request' as
 			 * the id is busted. */
 			continue;
 		}
 		req  = rinfo->shadow[id].request;
 
-		if (bret->operation != BLKIF_OP_DISCARD) {
+		if (bret.operation != BLKIF_OP_DISCARD) {
 			/*
 			 * We may need to wait for an extra response if the
 			 * I/O request is split in 2
 			 */
-			if (!blkif_completion(&id, rinfo, bret))
+			if (!blkif_completion(&id, rinfo, &bret))
 				continue;
 		}
 
 		if (add_id_to_freelist(rinfo, id)) {
 			WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n",
-			     info->gd->disk_name, op_name(bret->operation), id);
+			     info->gd->disk_name, op_name(bret.operation), id);
 			continue;
 		}
 
-		if (bret->status == BLKIF_RSP_OKAY)
+		if (bret.status == BLKIF_RSP_OKAY)
 			blkif_req(req)->error = BLK_STS_OK;
 		else
 			blkif_req(req)->error = BLK_STS_IOERR;
 
-		switch (bret->operation) {
+		switch (bret.operation) {
 		case BLKIF_OP_DISCARD:
-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				struct request_queue *rq = info->rq;
 				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
-					   info->gd->disk_name, op_name(bret->operation));
+					   info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 				info->feature_discard = 0;
 				info->feature_secdiscard = 0;
@@ -1617,15 +1617,15 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			break;
 		case BLKIF_OP_FLUSH_DISKCACHE:
 		case BLKIF_OP_WRITE_BARRIER:
-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
-				       info->gd->disk_name, op_name(bret->operation));
+				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
-			if (unlikely(bret->status == BLKIF_RSP_ERROR &&
+			if (unlikely(bret.status == BLKIF_RSP_ERROR &&
 				     rinfo->shadow[id].req.u.rw.nr_segments == 0)) {
 				printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
-				       info->gd->disk_name, op_name(bret->operation));
+				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
 			if (unlikely(blkif_req(req)->error)) {
@@ -1638,9 +1638,9 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			/* fall through */
 		case BLKIF_OP_READ:
 		case BLKIF_OP_WRITE:
-			if (unlikely(bret->status != BLKIF_RSP_OKAY))
+			if (unlikely(bret.status != BLKIF_RSP_OKAY))
 				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
-					"request: %x\n", bret->status);
+					"request: %x\n", bret.status);
 
 			break;
 		default:
-- 
git-series 0.9.1

* [PATCH 6/6] xen-blkfront: prepare request locally, only then put it on the shared ring
From: Marek Marczykowski-Górecki @ 2018-04-30 21:01 UTC
  To: xen-devel
  Cc: Marek Marczykowski-Górecki, stable, Konrad Rzeszutek Wilk,
	Roger Pau Monné,
	Boris Ostrovsky, Juergen Gross, Jens Axboe,
	open list:BLOCK LAYER, open list

Do not reuse data which the backend might already have modified. This
mostly concerns the private copy of the request (info->shadow[id].req):
make sure the request saved there is really the one just filled.

This is complementary to XSA155.
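
In sketch form, the resulting publish order is (condensed from the hunks
below):

    struct blkif_request ring_req = { 0 };

    /* fill ring_req entirely in private memory;
     * the backend cannot see it yet */

    /* copy the finished request to the shared ring... */
    *RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
    wmb();    /* ...and make it visible before advancing the producer */
    rinfo->ring.req_prod_pvt++;

    /* the shadow copy is taken from the local variable, never read
     * back from the shared page */
    rinfo->shadow[id].req = ring_req;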

CC: stable@vger.kernel.org
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 drivers/block/xen-blkfront.c | 76 +++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 32 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3926811..b100b55 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -525,19 +525,16 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
 
 static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
 					    struct request *req,
-					    struct blkif_request **ring_req)
+					    struct blkif_request *ring_req)
 {
 	unsigned long id;
 
-	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
-	rinfo->ring.req_prod_pvt++;
-
 	id = get_id_from_freelist(rinfo);
 	rinfo->shadow[id].request = req;
 	rinfo->shadow[id].status = REQ_WAITING;
 	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
 
-	(*ring_req)->u.rw.id = id;
+	ring_req->u.rw.id = id;
 
 	return id;
 }
@@ -545,23 +542,28 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
 static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
-	struct blkif_request *ring_req;
+	struct blkif_request ring_req = { 0 };
 	unsigned long id;
 
 	/* Fill out a communications ring structure. */
 	id = blkif_ring_get_request(rinfo, req, &ring_req);
 
-	ring_req->operation = BLKIF_OP_DISCARD;
-	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
-	ring_req->u.discard.id = id;
-	ring_req->u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
+	ring_req.operation = BLKIF_OP_DISCARD;
+	ring_req.u.discard.nr_sectors = blk_rq_sectors(req);
+	ring_req.u.discard.id = id;
+	ring_req.u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
 	if (req_op(req) == REQ_OP_SECURE_ERASE && info->feature_secdiscard)
-		ring_req->u.discard.flag = BLKIF_DISCARD_SECURE;
+		ring_req.u.discard.flag = BLKIF_DISCARD_SECURE;
 	else
-		ring_req->u.discard.flag = 0;
+		ring_req.u.discard.flag = 0;
+
+	/* make the request available to the backend */
+	*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
+	wmb();
+	rinfo->ring.req_prod_pvt++;
 
 	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
+	rinfo->shadow[id].req = ring_req;
 
 	return 0;
 }
@@ -693,7 +695,7 @@ static void blkif_setup_extra_req(struct blkif_request *first,
 static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
-	struct blkif_request *ring_req, *extra_ring_req = NULL;
+	struct blkif_request ring_req = { 0 }, extra_ring_req = { 0 };
 	unsigned long id, extra_id = NO_ASSOCIATED_ID;
 	bool require_extra_req = false;
 	int i;
@@ -758,16 +760,16 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		 * BLKIF_OP_WRITE
 		 */
 		BUG_ON(req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA);
-		ring_req->operation = BLKIF_OP_INDIRECT;
-		ring_req->u.indirect.indirect_op = rq_data_dir(req) ?
+		ring_req.operation = BLKIF_OP_INDIRECT;
+		ring_req.u.indirect.indirect_op = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		ring_req->u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		ring_req->u.indirect.handle = info->handle;
-		ring_req->u.indirect.nr_segments = num_grant;
+		ring_req.u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req.u.indirect.handle = info->handle;
+		ring_req.u.indirect.nr_segments = num_grant;
 	} else {
-		ring_req->u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		ring_req->u.rw.handle = info->handle;
-		ring_req->operation = rq_data_dir(req) ?
+		ring_req.u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req.u.rw.handle = info->handle;
+		ring_req.operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
 		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
 			/*
@@ -778,15 +780,15 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			 * since it is guaranteed ordered WRT previous writes.)
 			 */
 			if (info->feature_flush && info->feature_fua)
-				ring_req->operation =
+				ring_req.operation =
 					BLKIF_OP_WRITE_BARRIER;
 			else if (info->feature_flush)
-				ring_req->operation =
+				ring_req.operation =
 					BLKIF_OP_FLUSH_DISKCACHE;
 			else
-				ring_req->operation = 0;
+				ring_req.operation = 0;
 		}
-		ring_req->u.rw.nr_segments = num_grant;
+		ring_req.u.rw.nr_segments = num_grant;
 		if (unlikely(require_extra_req)) {
 			extra_id = blkif_ring_get_request(rinfo, req,
 							  &extra_ring_req);
@@ -796,7 +798,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			 */
 			rinfo->shadow[extra_id].num_sg = 0;
 
-			blkif_setup_extra_req(ring_req, extra_ring_req);
+			blkif_setup_extra_req(&ring_req, &extra_ring_req);
 
 			/* Link the 2 requests together */
 			rinfo->shadow[extra_id].associated_id = id;
@@ -804,12 +806,12 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		}
 	}
 
-	setup.ring_req = ring_req;
+	setup.ring_req = &ring_req;
 	setup.id = id;
 
 	setup.require_extra_req = require_extra_req;
 	if (unlikely(require_extra_req))
-		setup.extra_ring_req = extra_ring_req;
+		setup.extra_ring_req = &extra_ring_req;
 
 	for_each_sg(rinfo->shadow[id].sg, sg, num_sg, i) {
 		BUG_ON(sg->offset + sg->length > PAGE_SIZE);
@@ -831,10 +833,20 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	if (setup.segments)
 		kunmap_atomic(setup.segments);
 
+	/* make the request available to the backend */
+	*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
+	wmb();
+	rinfo->ring.req_prod_pvt++;
 	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
-	if (unlikely(require_extra_req))
-		rinfo->shadow[extra_id].req = *extra_ring_req;
+	rinfo->shadow[id].req = ring_req;
+
+	if (unlikely(require_extra_req)) {
+		*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = extra_ring_req;
+		wmb();
+		rinfo->ring.req_prod_pvt++;
+		/* Keep a private copy so we can reissue requests when recovering. */
+		rinfo->shadow[extra_id].req = extra_ring_req;
+	}
 
 	if (new_persistent_gnts)
 		gnttab_free_grant_references(setup.gref_head);
-- 
git-series 0.9.1

* Re: [PATCH 1/6] xen: Add RING_COPY_RESPONSE()
From: Boris Ostrovsky @ 2018-04-30 21:25 UTC
  To: Marek Marczykowski-Górecki, xen-devel
  Cc: stable, Juergen Gross, Stefano Stabellini, Greg Kroah-Hartman, open list

On 04/30/2018 05:01 PM, Marek Marczykowski-Górecki wrote:
> RING_GET_RESPONSE() on a shared ring is easy to use incorrectly
> (i.e., without considering that the other end may alter the data in the
> shared ring while it is being inspected).  Safe usage of a response
> generally requires taking a local copy.
>
> Provide a RING_COPY_RESPONSE() macro to use instead of
> RING_GET_RESPONSE() and an open-coded memcpy().  This takes care of
> ensuring that the copy is done correctly regardless of any possible
> compiler optimizations.
>
> Use a volatile source to prevent the compiler from reordering or
> omitting the copy.
>
> This is complementary to XSA155.
>
> CC: stable@vger.kernel.org
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  include/xen/interface/io/ring.h | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
> index 3f40501..03702f6 100644
> --- a/include/xen/interface/io/ring.h
> +++ b/include/xen/interface/io/ring.h
> @@ -201,6 +201,20 @@ struct __name##_back_ring {						\
>  #define RING_GET_RESPONSE(_r, _idx)					\
>      (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
>  
> +/*
> + * Get a local copy of a response.
> + *
> + * Use this in preference to RING_GET_RESPONSE() so all processing is
> + * done on a local copy that cannot be modified by the other end.
> + *
> + * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
> + * to be ineffective where _rsp is a struct which consists of only bitfields.
> + */
> +#define RING_COPY_RESPONSE(_r, _idx, _rsp) do {				\
> +	/* Use volatile to force the copy into _rsp. */			\
> +	*(_rsp) = *(volatile typeof(_rsp))RING_GET_RESPONSE(_r, _idx);	\
> +} while (0)
> +

To avoid repeating essentially the same comment, can you move the
RING_COPY_RESPONSE definition next to RING_COPY_REQUEST and adjust the
existing comment? And probably move RING_GET_RESPONSE next to
RING_GET_REQUEST? In other words:

#define RING_GET_REQUEST
#define RING_GET_RESPONSE

/* comment */
#define RING_COPY_REQUEST
#define RING_COPY_RESPONSE


Also, perhaps the two can be collapsed together, along the lines of

#define RING_COPY_(action, _r, _idx, _msg) do {                          \
        /* Use volatile to force the copy into _msg. */                 \
        *(_msg) = *(volatile typeof(_msg))RING_GET_##action(_r, _idx);   \
} while (0)

#define RING_COPY_REQUEST(_r, _idx, _req)   RING_COPY_(REQUEST, _r, _idx, _req)
#define RING_COPY_RESPONSE(_r, _idx, _rsp)  RING_COPY_(RESPONSE, _r, _idx, _rsp)


(I have not tried to compile this so it may well be wrong)

-boris



>  /* Loop termination condition: Would the specified index overflow the ring? */
>  #define RING_REQUEST_CONS_OVERFLOW(_r, _cons)				\
>      (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))

* Re: [PATCH 1/6] xen: Add RING_COPY_RESPONSE()
  2018-04-30 21:25   ` Boris Ostrovsky
@ 2018-04-30 21:27     ` Marek Marczykowski-Górecki
  2018-04-30 21:41       ` Boris Ostrovsky
  2018-04-30 21:41       ` Boris Ostrovsky
  2018-04-30 21:27     ` Marek Marczykowski-Górecki
  1 sibling, 2 replies; 31+ messages in thread
From: Marek Marczykowski-Górecki @ 2018-04-30 21:27 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: xen-devel, stable, Juergen Gross, Stefano Stabellini,
	Greg Kroah-Hartman, open list

[-- Attachment #1: Type: text/plain, Size: 960 bytes --]

On Mon, Apr 30, 2018 at 05:25:52PM -0400, Boris Ostrovsky wrote:
> Also, perhaps the two can be collapsed together, along the lines of
> 
> #define RING_COPY_(action, _r, _idx, _msg) do {                          \
>         /* Use volatile to force the copy into _msg. */                 \
>         *(_msg) = *(volatile typeof(_msg))RING_GET_##action(_r, _idx);   \
> } while (0)
> 
> #define RING_COPY_REQUEST(_r, _idx, _req)  RING_COPY_(REQUEST, _r, _idx,
> _req)
> #define RING_COPY_RESPONSE(_r, _idx, _rsp)  RING_COPY_(RESPONSE, _r,
> _idx, _rsp)
> 
> 
> (I have not tried to compile this so it may well be wrong)

It works, thanks :)
I'll wait with v2 until I get feedback on other patches.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 1/6] xen: Add RING_COPY_RESPONSE()
  2018-04-30 21:27     ` Marek Marczykowski-Górecki
  2018-04-30 21:41       ` Boris Ostrovsky
@ 2018-04-30 21:41       ` Boris Ostrovsky
  1 sibling, 0 replies; 31+ messages in thread
From: Boris Ostrovsky @ 2018-04-30 21:41 UTC (permalink / raw)
  To: Marek Marczykowski-Górecki
  Cc: xen-devel, stable, Juergen Gross, Stefano Stabellini,
	Greg Kroah-Hartman, open list


[-- Attachment #1.1: Type: text/plain, Size: 998 bytes --]

On 04/30/2018 05:27 PM, Marek Marczykowski-Górecki wrote:
> On Mon, Apr 30, 2018 at 05:25:52PM -0400, Boris Ostrovsky wrote:
>> Also, perhaps the two can be collapsed together, along the lines of
>>
>> #define RING_COPY_(action, _r, _idx, _msg) do {                          \
>>         /* Use volatile to force the copy into _msg. */                 \
>>         *(_msg) = *(volatile typeof(_msg))RING_GET_##action(_r, _idx);   \
>> } while (0)
>>
>> #define RING_COPY_REQUEST(_r, _idx, _req)  RING_COPY_(REQUEST, _r, _idx,
>> _req)
>> #define RING_COPY_RESPONSE(_r, _idx, _rsp)  RING_COPY_(RESPONSE, _r,
>> _idx, _rsp)
>>
>>
>> (I have not tried to compile this so it may well be wrong)
> It works, thanks :)
> I'll wait with v2 until I get feedback on other patches.
>


Oh, and one more thing --- the canonical version of this file lives in
Xen (include/public/io/ring.h) so it needs first to be accepted by Xen
maintainers.

-boris

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 6/6] xen-blkfront: prepare request locally, only then put it on the shared ring
  2018-04-30 21:01 ` Marek Marczykowski-Górecki
@ 2018-05-01  8:22     ` Roger Pau Monné
  2018-05-01  8:22     ` Roger Pau Monné
  1 sibling, 0 replies; 31+ messages in thread
From: Roger Pau Monné @ 2018-05-01  8:22 UTC (permalink / raw)
  To: Marek Marczykowski-Górecki
  Cc: xen-devel, stable, Konrad Rzeszutek Wilk, Boris Ostrovsky,
	Juergen Gross, Jens Axboe, open list:BLOCK LAYER, open list

On Mon, Apr 30, 2018 at 11:01:50PM +0200, Marek Marczykowski-Górecki wrote:
> Do not reuse data which theoretically might be already modified by the
> backend. This is mostly about private copy of the request
> (info->shadow[id].req) - make sure the request saved there is really the
> one just filled.
>
> This is complementary to XSA155.
> 
> CC: stable@vger.kernel.org
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  drivers/block/xen-blkfront.c | 76 +++++++++++++++++++++----------------
>  1 file changed, 44 insertions(+), 32 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 3926811..b100b55 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -525,19 +525,16 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
>  
>  static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,

The name of this function should be changed IMO, since you are no
longer getting a request from the ring, but just initializing a
request struct.
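
(A possible shape for such a rename -- the new name here is only an
illustration, not something from the series:)

static unsigned long blkif_init_request(struct blkfront_ring_info *rinfo,
					struct request *req,
					struct blkif_request *ring_req)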

>  					    struct request *req,
> -					    struct blkif_request **ring_req)
> +					    struct blkif_request *ring_req)
>  {
>  	unsigned long id;
>  
> -	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
> -	rinfo->ring.req_prod_pvt++;
> -
>  	id = get_id_from_freelist(rinfo);
>  	rinfo->shadow[id].request = req;
>  	rinfo->shadow[id].status = REQ_WAITING;
>  	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
>  
> -	(*ring_req)->u.rw.id = id;
> +	ring_req->u.rw.id = id;
>  
>  	return id;
>  }
> @@ -545,23 +542,28 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
>  static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
>  {
>  	struct blkfront_info *info = rinfo->dev_info;
> -	struct blkif_request *ring_req;
> +	struct blkif_request ring_req = { 0 };
>  	unsigned long id;
>  
>  	/* Fill out a communications ring structure. */
>  	id = blkif_ring_get_request(rinfo, req, &ring_req);

Maybe I'm missing something obvious here, but you are adding a struct
allocated on the stack to the shadow ring copy, isn't this dangerous?

The pointer stored in the shadow ring copy is going to be invalid
after returning from this function.

The same comment applies to the other calls to blkif_ring_get_request
below that pass a ring_req allocated on the stack.
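
(For reference, a minimal sketch of the publish step such a scheme
implies -- the exact call site is an assumption, not shown in the quoted
hunks; the staging copy only ever reaches the shared ring as data, once
it is fully filled in:)

	/* Publish the completed request; the backend never sees a
	 * partially filled slot. */
	*RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt) = ring_req;
	rinfo->ring.req_prod_pvt++;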

Thanks, Roger.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Xen-devel] [PATCH 6/6] xen-blkfront: prepare request locally, only then put it on the shared ring
  2018-05-01  8:22     ` Roger Pau Monné
@ 2018-05-01  9:15       ` Roger Pau Monné
  -1 siblings, 0 replies; 31+ messages in thread
From: Roger Pau Monné @ 2018-05-01  9:15 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Marek Marczykowski-Górecki, Juergen Gross, Jens Axboe,
	open list, stable, open list:BLOCK LAYER, xen-devel,
	Boris Ostrovsky

On Tue, May 01, 2018 at 09:22:31AM +0100, Roger Pau Monné wrote:
> On Mon, Apr 30, 2018 at 11:01:50PM +0200, Marek Marczykowski-Górecki wrote:
> >  					    struct request *req,
> > -					    struct blkif_request **ring_req)
> > +					    struct blkif_request *ring_req)
> >  {
> >  	unsigned long id;
> >  
> > -	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
> > -	rinfo->ring.req_prod_pvt++;
> > -
> >  	id = get_id_from_freelist(rinfo);
> >  	rinfo->shadow[id].request = req;
> >  	rinfo->shadow[id].status = REQ_WAITING;
> >  	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
> >  
> > -	(*ring_req)->u.rw.id = id;
> > +	ring_req->u.rw.id = id;
> >  
> >  	return id;
> >  }
> > @@ -545,23 +542,28 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
> >  static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
> >  {
> >  	struct blkfront_info *info = rinfo->dev_info;
> > -	struct blkif_request *ring_req;
> > +	struct blkif_request ring_req = { 0 };
> >  	unsigned long id;
> >  
> >  	/* Fill out a communications ring structure. */
> >  	id = blkif_ring_get_request(rinfo, req, &ring_req);
> 
> Maybe I'm missing something obvious here, but you are adding a struct
> allocated on the stack to the shadow ring copy, isn't this dangerous?

The above comment is wrong, you are storing a pointer to 'req' in the
shadow ring copy, which is fine and is not the ring request.
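
(Spelled out as a sketch -- field names as in the quoted diff, everything
else trimmed away:)

	struct blkif_request ring_req = { 0 };	/* on-stack staging copy */

	id = get_id_from_freelist(rinfo);
	rinfo->shadow[id].request = req;	/* long-lived struct request
						 * pointer, not &ring_req */
	ring_req.u.rw.id = id;			/* plain data, later copied
						 * onto the shared ring */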

Roger.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Xen-devel] [PATCH 4/6] xen-netfront: add range check for Tx response id
  2018-04-30 21:01 ` [PATCH 4/6] xen-netfront: add range check for Tx response id Marek Marczykowski-Górecki
@ 2018-05-01 10:05     ` Wei Liu
  2018-05-01 10:05     ` Wei Liu
  1 sibling, 0 replies; 31+ messages in thread
From: Wei Liu @ 2018-05-01 10:05 UTC (permalink / raw)
  To: Marek Marczykowski-Górecki
  Cc: xen-devel, Juergen Gross, open list:NETWORKING DRIVERS, stable,
	open list, Boris Ostrovsky, Wei Liu

On Mon, Apr 30, 2018 at 11:01:48PM +0200, Marek Marczykowski-Górecki wrote:
> Tx response ID is fetched from shared page, so make sure it is sane
> before using it as an array index.
> 
> CC: stable@vger.kernel.org
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  drivers/net/xen-netfront.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 934b8a4..55c9b25 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -394,6 +394,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
>  				continue;
>  
>  			id  = txrsp.id;
> +			BUG_ON(id >= NET_TX_RING_SIZE);

It is better to use ARRAY_SIZE here.
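
A sketch of the suggested form, assuming the bound should track the
queue->tx_skbs[] array that is indexed just below:

			id  = txrsp.id;
			BUG_ON(id >= ARRAY_SIZE(queue->tx_skbs));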

Wei.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 0/6] Fix XSA-155-like bugs in frontend drivers
  2018-04-30 21:01 [PATCH 0/6] Fix XSA-155-like bugs in frontend drivers Marek Marczykowski-Górecki
                   ` (8 preceding siblings ...)
  2018-04-30 21:01 ` Marek Marczykowski-Górecki
@ 2018-05-01 10:12 ` Wei Liu
  9 siblings, 0 replies; 31+ messages in thread
From: Wei Liu @ 2018-05-01 10:12 UTC (permalink / raw)
  To: Marek Marczykowski-Górecki
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Ian Jackson,
	xen-devel, Boris Ostrovsky, Roger Pau Monné

(Drop Linux lists and people)

On Mon, Apr 30, 2018 at 11:01:44PM +0200, Marek Marczykowski-Górecki wrote:
> Patches in original Xen Security Advisory 155 cared only about backend drivers
> while leaving frontend patches to be "developed and released (publicly) after
> the embargo date". This is said series.
> 
> Marek Marczykowski-Górecki (6):
>   xen: Add RING_COPY_RESPONSE()
>   xen-netfront: copy response out of shared buffer before accessing it
>   xen-netfront: do not use data already exposed to backend
>   xen-netfront: add range check for Tx response id
>   xen-blkfront: make local copy of response before using it
>   xen-blkfront: prepare request locally, only then put it on the shared ring
> 
>  drivers/block/xen-blkfront.c    | 110 ++++++++++++++++++---------------
>  drivers/net/xen-netfront.c      |  61 +++++++++---------
>  include/xen/interface/io/ring.h |  14 ++++-
>  3 files changed, 106 insertions(+), 79 deletions(-)
> 
> base-commit: 6d08b06e67cd117f6992c46611dfb4ce267cd71e

If you're really paranoid you probably also want to consider
implementing more checks for the frontend.

See https://xenbits.xen.org/xsa/advisory-39.html for a plethora of
potential issues.

Wei.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Xen-devel] [PATCH 2/6] xen-netfront: copy response out of shared buffer before accessing it
  2018-04-30 21:01   ` Marek Marczykowski-Górecki
  (?)
  (?)
@ 2018-05-02  5:20   ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 31+ messages in thread
From: Oleksandr Andrushchenko @ 2018-05-02  5:20 UTC (permalink / raw)
  To: Marek Marczykowski-Górecki, xen-devel
  Cc: Juergen Gross, open list:NETWORKING DRIVERS, stable, open list,
	Boris Ostrovsky

On 05/01/2018 12:01 AM, Marek Marczykowski-Górecki wrote:
> Make local copy of the response, otherwise backend might modify it while
> frontend is already processing it - leading to time of check / time of
> use issue.
>
> This is complementary to XSA155.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>   drivers/net/xen-netfront.c | 51 +++++++++++++++++++--------------------
>   1 file changed, 25 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 4dd0668..dc99763 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -387,13 +387,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
>   		rmb(); /* Ensure we see responses up to 'rp'. */
>   
>   		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
Side comment: the original concern was about the counters above; will
those be addressed in a dedicated series?
> -			struct xen_netif_tx_response *txrsp;
> +			struct xen_netif_tx_response txrsp;
>   
> -			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
> -			if (txrsp->status == XEN_NETIF_RSP_NULL)
> +			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
> +			if (txrsp.status == XEN_NETIF_RSP_NULL)
>   				continue;
>   
IMO, there is still no guarantee you access consistent data after this
change.
What if part of the response was ok when you started copying and
then, in the middle, the backend poisons the end of the response?
This seems to only minimize(?) the chances of working with inconsistent
data rather than removing the possibility completely.
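
(What the single snapshot does buy, as a sketch -- the early-return
validation here is hypothetical, not from the patch: once the copy is
taken, every check and every use sees the same bytes, so a passed check
cannot be invalidated afterwards, even if the snapshot itself happened
to mix old and new backend writes:)

	struct xen_netif_tx_response txrsp;

	RING_COPY_RESPONSE(&queue->tx, cons, &txrsp); /* one snapshot */
	if (txrsp.id >= NET_TX_RING_SIZE)             /* validate the copy */
		return;
	/* txrsp.id used below is exactly the value just checked. */
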
> -			id  = txrsp->id;
> +			id  = txrsp.id;
>   			skb = queue->tx_skbs[id].skb;
>   			if (unlikely(gnttab_query_foreign_access(
>   				queue->grant_tx_ref[id]) != 0)) {
> @@ -741,7 +741,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
>   			     RING_IDX rp)
>   
>   {
> -	struct xen_netif_extra_info *extra;
> +	struct xen_netif_extra_info extra;
>   	struct device *dev = &queue->info->netdev->dev;
>   	RING_IDX cons = queue->rx.rsp_cons;
>   	int err = 0;
> @@ -757,24 +757,23 @@ static int xennet_get_extras(struct netfront_queue *queue,
>   			break;
>   		}
>   
> -		extra = (struct xen_netif_extra_info *)
> -			RING_GET_RESPONSE(&queue->rx, ++cons);
> +		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
>   
> -		if (unlikely(!extra->type ||
> -			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> +		if (unlikely(!extra.type ||
> +			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>   			if (net_ratelimit())
>   				dev_warn(dev, "Invalid extra type: %d\n",
> -					extra->type);
> +					extra.type);
>   			err = -EINVAL;
>   		} else {
> -			memcpy(&extras[extra->type - 1], extra,
> -			       sizeof(*extra));
> +			memcpy(&extras[extra.type - 1], &extra,
> +			       sizeof(extra));
>   		}
>   
>   		skb = xennet_get_rx_skb(queue, cons);
>   		ref = xennet_get_rx_ref(queue, cons);
>   		xennet_move_rx_slot(queue, skb, ref);
> -	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
> +	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
>   
>   	queue->rx.rsp_cons = cons;
>   	return err;
> @@ -784,28 +783,28 @@ static int xennet_get_responses(struct netfront_queue *queue,
>   				struct netfront_rx_info *rinfo, RING_IDX rp,
>   				struct sk_buff_head *list)
>   {
> -	struct xen_netif_rx_response *rx = &rinfo->rx;
> +	struct xen_netif_rx_response rx = rinfo->rx;
>   	struct xen_netif_extra_info *extras = rinfo->extras;
>   	struct device *dev = &queue->info->netdev->dev;
>   	RING_IDX cons = queue->rx.rsp_cons;
>   	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
>   	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
> -	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
> +	int max = MAX_SKB_FRAGS + (rx.status <= RX_COPY_THRESHOLD);
>   	int slots = 1;
>   	int err = 0;
>   	unsigned long ret;
>   
> -	if (rx->flags & XEN_NETRXF_extra_info) {
> +	if (rx.flags & XEN_NETRXF_extra_info) {
>   		err = xennet_get_extras(queue, extras, rp);
>   		cons = queue->rx.rsp_cons;
>   	}
>   
>   	for (;;) {
> -		if (unlikely(rx->status < 0 ||
> -			     rx->offset + rx->status > XEN_PAGE_SIZE)) {
> +		if (unlikely(rx.status < 0 ||
> +			     rx.offset + rx.status > XEN_PAGE_SIZE)) {
>   			if (net_ratelimit())
>   				dev_warn(dev, "rx->offset: %u, size: %d\n",
> -					 rx->offset, rx->status);
> +					 rx.offset, rx.status);
>   			xennet_move_rx_slot(queue, skb, ref);
>   			err = -EINVAL;
>   			goto next;
> @@ -819,7 +818,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>   		if (ref == GRANT_INVALID_REF) {
>   			if (net_ratelimit())
>   				dev_warn(dev, "Bad rx response id %d.\n",
> -					 rx->id);
> +					 rx.id);
>   			err = -EINVAL;
>   			goto next;
>   		}
> @@ -832,7 +831,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>   		__skb_queue_tail(list, skb);
>   
>   next:
> -		if (!(rx->flags & XEN_NETRXF_more_data))
> +		if (!(rx.flags & XEN_NETRXF_more_data))
>   			break;
>   
>   		if (cons + slots == rp) {
> @@ -842,7 +841,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>   			break;
>   		}
>   
> -		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
> +		RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx);
>   		skb = xennet_get_rx_skb(queue, cons + slots);
>   		ref = xennet_get_rx_ref(queue, cons + slots);
>   		slots++;
> @@ -898,9 +897,9 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
>   	struct sk_buff *nskb;
>   
>   	while ((nskb = __skb_dequeue(list))) {
> -		struct xen_netif_rx_response *rx =
> -			RING_GET_RESPONSE(&queue->rx, ++cons);
> +		struct xen_netif_rx_response rx;
>   		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
> +		RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
>   
>   		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
>   			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
> @@ -911,7 +910,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
>   		BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
>   
>   		skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
> -				rx->offset, rx->status, PAGE_SIZE);
> +				rx.offset, rx.status, PAGE_SIZE);
>   
>   		skb_shinfo(nskb)->nr_frags = 0;
>   		kfree_skb(nskb);
> @@ -1007,7 +1006,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>   	i = queue->rx.rsp_cons;
>   	work_done = 0;
>   	while ((i != rp) && (work_done < budget)) {
> -		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
> +		RING_COPY_RESPONSE(&queue->rx, i, rx);
>   		memset(extras, 0, sizeof(rinfo.extras));
>   
>   		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2018-05-02  5:21 UTC | newest]

Thread overview: 15 messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-30 21:01 [PATCH 0/6] Fix XSA-155-like bugs in frontend drivers Marek Marczykowski-Górecki
2018-04-30 21:01 ` [PATCH 1/6] xen: Add RING_COPY_RESPONSE() Marek Marczykowski-Górecki
2018-04-30 21:25   ` Boris Ostrovsky
2018-04-30 21:27     ` Marek Marczykowski-Górecki
2018-04-30 21:41       ` Boris Ostrovsky
2018-04-30 21:01 ` [PATCH 2/6] xen-netfront: copy response out of shared buffer before accessing it Marek Marczykowski-Górecki
2018-05-02  5:20   ` Oleksandr Andrushchenko
2018-04-30 21:01 ` [PATCH 3/6] xen-netfront: do not use data already exposed to backend Marek Marczykowski-Górecki
2018-04-30 21:01 ` [PATCH 4/6] xen-netfront: add range check for Tx response id Marek Marczykowski-Górecki
2018-05-01 10:05   ` Wei Liu
2018-04-30 21:01 ` [PATCH 5/6] xen-blkfront: make local copy of response before using it Marek Marczykowski-Górecki
2018-04-30 21:01 ` [PATCH 6/6] xen-blkfront: prepare request locally, only then put it on the shared ring Marek Marczykowski-Górecki
2018-05-01  8:22   ` Roger Pau Monné
2018-05-01  9:15     ` Roger Pau Monné
2018-05-01 10:12 ` [PATCH 0/6] Fix XSA-155-like bugs in frontend drivers Wei Liu
