* [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
@ 2019-06-12 15:56 Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 01/17] net/mlx5e: Attach/detach XDP program safely Maxim Mikityanskiy
                   ` (18 more replies)
  0 siblings, 19 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

This series contains improvements to the AF_XDP kernel infrastructure
and adds AF_XDP support to mlx5e. The infrastructure improvements are
required for mlx5e, but some of them also benefit all existing drivers,
and some will be useful for other drivers that want to implement
AF_XDP.

The performance testing was performed on a machine with the following
configuration:

- 24 cores of Intel Xeon E5-2620 v3 @ 2.40 GHz
- Mellanox ConnectX-5 Ex with 100 Gbit/s link

The results with retpoline disabled, single stream:

txonly: 33.3 Mpps (21.5 Mpps with queue and app pinned to the same CPU)
rxdrop: 12.2 Mpps
l2fwd: 9.4 Mpps

The results with retpoline enabled, single stream:

txonly: 21.3 Mpps (14.1 Mpps with queue and app pinned to the same CPU)
rxdrop: 9.9 Mpps
l2fwd: 6.8 Mpps

v2 changes:

Added patches for mlx5e and addressed the comments for v1. Rebased for
bpf-next.

v3 changes:

Rebased for the newer bpf-next, resolved conflicts in libbpf. Addressed
Björn's comments for coding style. Fixed a bug in error handling flow in
mlx5e_open_xsk.

v4 changes:

The UAPI is not changed; XSK RX queues are exposed to the kernel. The
lower half of the available RX queues are regular queues, and the upper
half are XSK RX queues (e.g., with 64 queues available, queues 0-31 are
regular and 32-63 serve XSK). The patch "xsk: Extend channels to support
combined XSK/non-XSK traffic" was dropped, and the final patch was
reworked accordingly.

Added "net/mlx5e: Attach/detach XDP program safely", as the changes
introduced in the XSK patch base on the stuff from this one.

Added "libbpf: Support drivers with non-combined channels", which aligns
the condition in libbpf with the condition in the kernel.

Rebased over the newer bpf-next.

Maxim Mikityanskiy (17):
  net/mlx5e: Attach/detach XDP program safely
  xsk: Add API to check for available entries in FQ
  xsk: Add getsockopt XDP_OPTIONS
  libbpf: Support getsockopt XDP_OPTIONS
  xsk: Change the default frame size to 4096 and allow controlling it
  xsk: Return the whole xdp_desc from xsk_umem_consume_tx
  libbpf: Support drivers with non-combined channels
  net/mlx5e: Replace deprecated PCI_DMA_TODEVICE
  net/mlx5e: Calculate linear RX frag size considering XSK
  net/mlx5e: Allow ICO SQ to be used by multiple RQs
  net/mlx5e: Refactor struct mlx5e_xdp_info
  net/mlx5e: Share the XDP SQ for XDP_TX between RQs
  net/mlx5e: XDP_TX from UMEM support
  net/mlx5e: Consider XSK in XDP MTU limit calculation
  net/mlx5e: Encapsulate open/close queues into a function
  net/mlx5e: Move queue param structs to en/params.h
  net/mlx5e: Add XSK zero-copy support

 drivers/net/ethernet/intel/i40e/i40e_xsk.c    |  12 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c  |  15 +-
 .../net/ethernet/mellanox/mlx5/core/Makefile  |   2 +-
 drivers/net/ethernet/mellanox/mlx5/core/en.h  | 155 +++-
 .../ethernet/mellanox/mlx5/core/en/params.c   | 108 ++-
 .../ethernet/mellanox/mlx5/core/en/params.h   | 118 ++-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 231 ++++--
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  36 +-
 .../mellanox/mlx5/core/en/xsk/Makefile        |   1 +
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 192 +++++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.h   |  27 +
 .../mellanox/mlx5/core/en/xsk/setup.c         | 223 ++++++
 .../mellanox/mlx5/core/en/xsk/setup.h         |  25 +
 .../ethernet/mellanox/mlx5/core/en/xsk/tx.c   | 111 +++
 .../ethernet/mellanox/mlx5/core/en/xsk/tx.h   |  15 +
 .../ethernet/mellanox/mlx5/core/en/xsk/umem.c | 267 +++++++
 .../ethernet/mellanox/mlx5/core/en/xsk/umem.h |  31 +
 .../ethernet/mellanox/mlx5/core/en_ethtool.c  |  29 +-
 .../mellanox/mlx5/core/en_fs_ethtool.c        |  18 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 726 ++++++++++++------
 .../net/ethernet/mellanox/mlx5/core/en_rep.c  |  12 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 104 ++-
 .../ethernet/mellanox/mlx5/core/en_stats.c    | 115 ++-
 .../ethernet/mellanox/mlx5/core/en_stats.h    |  30 +
 .../net/ethernet/mellanox/mlx5/core/en_txrx.c |  42 +-
 .../ethernet/mellanox/mlx5/core/ipoib/ipoib.c |  14 +-
 drivers/net/ethernet/mellanox/mlx5/core/wq.h  |   5 -
 include/net/xdp_sock.h                        |  27 +-
 include/uapi/linux/if_xdp.h                   |   8 +
 net/xdp/xsk.c                                 |  36 +-
 net/xdp/xsk_queue.h                           |  14 +
 samples/bpf/xdpsock_user.c                    |  44 +-
 tools/include/uapi/linux/if_xdp.h             |   8 +
 tools/lib/bpf/xsk.c                           |  18 +-
 tools/lib/bpf/xsk.h                           |   2 +-
 35 files changed, 2337 insertions(+), 484 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/Makefile
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.h
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.h
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en/xsk/umem.h

-- 
2.19.1


* [PATCH bpf-next v4 01/17] net/mlx5e: Attach/detach XDP program safely
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 02/17] xsk: Add API to check for available entries in FQ Maxim Mikityanskiy
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

When an XDP program is set, a full reopen of all channels happens in two
cases:

1. When there was no program set, and a new one is being set.

2. When there was a program set, but it's being unset.

The full reopen is necessary because the channel parameters may change
when XDP is enabled or disabled. However, it's currently performed in an
unsafe way: the old channels are closed before the new ones are opened,
so if the new channels fail to open, the interface goes down. Use the
safe channel-switching helper instead; it is already used for other
configuration changes, and it keeps the old channels open until the new
ones are ready.
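
A minimal sketch of the safe-switch pattern this patch adopts (names
taken from the mlx5e driver; locking and the program reference handling
from the actual patch are omitted):

    static int mlx5e_xdp_set_reset_sketch(struct mlx5e_priv *priv,
                                          struct bpf_prog *prog)
    {
            struct mlx5e_channels new_channels = {};

            new_channels.params = priv->channels.params;
            new_channels.params.xdp_prog = prog;
            mlx5e_set_rq_type(priv->mdev, &new_channels.params);

            /* The old channels stay open until the new ones are ready,
             * so a failure here leaves the interface running with the
             * old configuration.
             */
            return mlx5e_safe_switch_channels(priv, &new_channels, NULL);
    }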

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 31 ++++++++++++-------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index c65cefd84eda..3e54b1f33587 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4192,8 +4192,6 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
 	/* no need for full reset when exchanging programs */
 	reset = (!priv->channels.params.xdp_prog || !prog);
 
-	if (was_opened && reset)
-		mlx5e_close_locked(netdev);
 	if (was_opened && !reset) {
 		/* num_channels is invariant here, so we can take the
 		 * batched reference right upfront.
@@ -4205,20 +4203,31 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
 		}
 	}
 
-	/* exchange programs, extra prog reference we got from caller
-	 * as long as we don't fail from this point onwards.
-	 */
-	old_prog = xchg(&priv->channels.params.xdp_prog, prog);
+	if (was_opened && reset) {
+		struct mlx5e_channels new_channels = {};
+
+		new_channels.params = priv->channels.params;
+		new_channels.params.xdp_prog = prog;
+		mlx5e_set_rq_type(priv->mdev, &new_channels.params);
+		old_prog = priv->channels.params.xdp_prog;
+
+		err = mlx5e_safe_switch_channels(priv, &new_channels, NULL);
+		if (err)
+			goto unlock;
+	} else {
+		/* exchange programs, extra prog reference we got from caller
+		 * as long as we don't fail from this point onwards.
+		 */
+		old_prog = xchg(&priv->channels.params.xdp_prog, prog);
+	}
+
 	if (old_prog)
 		bpf_prog_put(old_prog);
 
-	if (reset) /* change RQ type according to priv->xdp_prog */
+	if (!was_opened && reset) /* change RQ type according to priv->xdp_prog */
 		mlx5e_set_rq_type(priv->mdev, &priv->channels.params);
 
-	if (was_opened && reset)
-		err = mlx5e_open_locked(netdev);
-
-	if (!test_bit(MLX5E_STATE_OPENED, &priv->state) || reset)
+	if (!was_opened || reset)
 		goto unlock;
 
 	/* exchanging programs w/o reset, we update ref counts on behalf
-- 
2.19.1


* [PATCH bpf-next v4 02/17] xsk: Add API to check for available entries in FQ
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 01/17] net/mlx5e: Attach/detach XDP program safely Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-13 12:50   ` Björn Töpel
  2019-06-12 15:56 ` [PATCH bpf-next v4 03/17] xsk: Add getsockopt XDP_OPTIONS Maxim Mikityanskiy
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Add a function that checks whether the Fill Ring has the specified
number of descriptors available. It will be useful for mlx5e, which
checks in advance whether it can allocate a whole bulk of RX
descriptors, to get the best performance.
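
For illustration, a driver-side user of the new helper could look as
follows (the my_drv_* name is hypothetical; mlx5e will do the
equivalent in its RX path):

    /* Check the Fill Ring, including the reuse queue, once before
     * committing to allocate a whole bulk of RX buffers.
     */
    static bool my_drv_can_alloc_bulk(struct xdp_umem *umem, u32 bulk_size)
    {
            return xsk_umem_has_addrs_rq(umem, bulk_size);
    }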

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 include/net/xdp_sock.h | 21 +++++++++++++++++++++
 net/xdp/xsk.c          |  6 ++++++
 net/xdp/xsk_queue.h    | 14 ++++++++++++++
 3 files changed, 41 insertions(+)

diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index ae0f368a62bb..b6f5ebae43a1 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -77,6 +77,7 @@ int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);
 void xsk_flush(struct xdp_sock *xs);
 bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs);
 /* Used from netdev driver */
+bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt);
 u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr);
 void xsk_umem_discard_addr(struct xdp_umem *umem);
 void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries);
@@ -99,6 +100,16 @@ static inline dma_addr_t xdp_umem_get_dma(struct xdp_umem *umem, u64 addr)
 }
 
 /* Reuse-queue aware version of FILL queue helpers */
+static inline bool xsk_umem_has_addrs_rq(struct xdp_umem *umem, u32 cnt)
+{
+	struct xdp_umem_fq_reuse *rq = umem->fq_reuse;
+
+	if (rq->length >= cnt)
+		return true;
+
+	return xsk_umem_has_addrs(umem, cnt - rq->length);
+}
+
 static inline u64 *xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr)
 {
 	struct xdp_umem_fq_reuse *rq = umem->fq_reuse;
@@ -146,6 +157,11 @@ static inline bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
 	return false;
 }
 
+static inline bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt)
+{
+	return false;
+}
+
 static inline u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr)
 {
 	return NULL;
@@ -200,6 +216,11 @@ static inline dma_addr_t xdp_umem_get_dma(struct xdp_umem *umem, u64 addr)
 	return 0;
 }
 
+static inline bool xsk_umem_has_addrs_rq(struct xdp_umem *umem, u32 cnt)
+{
+	return false;
+}
+
 static inline u64 *xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr)
 {
 	return NULL;
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index a14e8864e4fa..b68a380f50b3 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -37,6 +37,12 @@ bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
 		READ_ONCE(xs->umem->fq);
 }
 
+bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt)
+{
+	return xskq_has_addrs(umem->fq, cnt);
+}
+EXPORT_SYMBOL(xsk_umem_has_addrs);
+
 u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr)
 {
 	return xskq_peek_addr(umem->fq, addr);
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 88b9ae24658d..12b49784a6d5 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -117,6 +117,20 @@ static inline u32 xskq_nb_free(struct xsk_queue *q, u32 producer, u32 dcnt)
 	return q->nentries - (producer - q->cons_tail);
 }
 
+static inline bool xskq_has_addrs(struct xsk_queue *q, u32 cnt)
+{
+	u32 entries = q->prod_tail - q->cons_tail;
+
+	if (entries >= cnt)
+		return true;
+
+	/* Refresh the local pointer. */
+	q->prod_tail = READ_ONCE(q->ring->producer);
+	entries = q->prod_tail - q->cons_tail;
+
+	return entries >= cnt;
+}
+
 /* UMEM queue */
 
 static inline bool xskq_is_valid_addr(struct xsk_queue *q, u64 addr)
-- 
2.19.1


* [PATCH bpf-next v4 03/17] xsk: Add getsockopt XDP_OPTIONS
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 01/17] net/mlx5e: Attach/detach XDP program safely Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 02/17] xsk: Add API to check for available entries in FQ Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-13 12:50   ` Björn Töpel
  2019-06-12 15:56 ` [PATCH bpf-next v4 04/17] libbpf: Support " Maxim Mikityanskiy
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Make it possible for the application to determine whether the AF_XDP
socket is running in zero-copy mode. To achieve this, add a new
getsockopt option XDP_OPTIONS that returns flags. The only flag
supported for now is the zero-copy mode indicator.
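
An illustrative userspace snippet (not part of this patch; assumes
<sys/socket.h>, <errno.h> and the updated <linux/if_xdp.h>):

    /* Returns 1 if the AF_XDP socket runs in zero-copy mode, 0 if it
     * fell back to copy mode, and a negative errno on failure.
     */
    static int xsk_is_zerocopy(int xsk_fd)
    {
            struct xdp_options opts;
            socklen_t optlen = sizeof(opts);

            if (getsockopt(xsk_fd, SOL_XDP, XDP_OPTIONS, &opts, &optlen))
                    return -errno;

            return !!(opts.flags & XDP_OPTIONS_ZEROCOPY);
    }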

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 include/uapi/linux/if_xdp.h       |  8 ++++++++
 net/xdp/xsk.c                     | 20 ++++++++++++++++++++
 tools/include/uapi/linux/if_xdp.h |  8 ++++++++
 3 files changed, 36 insertions(+)

diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
index caed8b1614ff..faaa5ca2a117 100644
--- a/include/uapi/linux/if_xdp.h
+++ b/include/uapi/linux/if_xdp.h
@@ -46,6 +46,7 @@ struct xdp_mmap_offsets {
 #define XDP_UMEM_FILL_RING		5
 #define XDP_UMEM_COMPLETION_RING	6
 #define XDP_STATISTICS			7
+#define XDP_OPTIONS			8
 
 struct xdp_umem_reg {
 	__u64 addr; /* Start of packet data area */
@@ -60,6 +61,13 @@ struct xdp_statistics {
 	__u64 tx_invalid_descs; /* Dropped due to invalid descriptor */
 };
 
+struct xdp_options {
+	__u32 flags;
+};
+
+/* Flags for the flags field of struct xdp_options */
+#define XDP_OPTIONS_ZEROCOPY (1 << 0)
+
 /* Pgoff for mmaping the rings */
 #define XDP_PGOFF_RX_RING			  0
 #define XDP_PGOFF_TX_RING		 0x80000000
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index b68a380f50b3..35ca531ac74e 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -650,6 +650,26 @@ static int xsk_getsockopt(struct socket *sock, int level, int optname,
 
 		return 0;
 	}
+	case XDP_OPTIONS:
+	{
+		struct xdp_options opts = {};
+
+		if (len < sizeof(opts))
+			return -EINVAL;
+
+		mutex_lock(&xs->mutex);
+		if (xs->zc)
+			opts.flags |= XDP_OPTIONS_ZEROCOPY;
+		mutex_unlock(&xs->mutex);
+
+		len = sizeof(opts);
+		if (copy_to_user(optval, &opts, len))
+			return -EFAULT;
+		if (put_user(len, optlen))
+			return -EFAULT;
+
+		return 0;
+	}
 	default:
 		break;
 	}
diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
index caed8b1614ff..faaa5ca2a117 100644
--- a/tools/include/uapi/linux/if_xdp.h
+++ b/tools/include/uapi/linux/if_xdp.h
@@ -46,6 +46,7 @@ struct xdp_mmap_offsets {
 #define XDP_UMEM_FILL_RING		5
 #define XDP_UMEM_COMPLETION_RING	6
 #define XDP_STATISTICS			7
+#define XDP_OPTIONS			8
 
 struct xdp_umem_reg {
 	__u64 addr; /* Start of packet data area */
@@ -60,6 +61,13 @@ struct xdp_statistics {
 	__u64 tx_invalid_descs; /* Dropped due to invalid descriptor */
 };
 
+struct xdp_options {
+	__u32 flags;
+};
+
+/* Flags for the flags field of struct xdp_options */
+#define XDP_OPTIONS_ZEROCOPY (1 << 0)
+
 /* Pgoff for mmaping the rings */
 #define XDP_PGOFF_RX_RING			  0
 #define XDP_PGOFF_TX_RING		 0x80000000
-- 
2.19.1


* [PATCH bpf-next v4 04/17] libbpf: Support getsockopt XDP_OPTIONS
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (2 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 03/17] xsk: Add getsockopt XDP_OPTIONS Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-13 12:51   ` Björn Töpel
  2019-06-12 15:56 ` [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it Maxim Mikityanskiy
                   ` (14 subsequent siblings)
  18 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Query XDP_OPTIONS in libbpf to determine whether zero-copy mode is
active.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 tools/lib/bpf/xsk.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index 7ef6293b4fd7..bf15a80a37c2 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -65,6 +65,7 @@ struct xsk_socket {
 	int xsks_map_fd;
 	__u32 queue_id;
 	char ifname[IFNAMSIZ];
+	bool zc;
 };
 
 struct xsk_nl_info {
@@ -480,6 +481,7 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
 	void *rx_map = NULL, *tx_map = NULL;
 	struct sockaddr_xdp sxdp = {};
 	struct xdp_mmap_offsets off;
+	struct xdp_options opts;
 	struct xsk_socket *xsk;
 	socklen_t optlen;
 	int err;
@@ -597,6 +599,16 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
 	}
 
 	xsk->prog_fd = -1;
+
+	optlen = sizeof(opts);
+	err = getsockopt(xsk->fd, SOL_XDP, XDP_OPTIONS, &opts, &optlen);
+	if (err) {
+		err = -errno;
+		goto out_mmap_tx;
+	}
+
+	xsk->zc = opts.flags & XDP_OPTIONS_ZEROCOPY;
+
 	if (!(xsk->config.libbpf_flags & XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD)) {
 		err = xsk_setup_xdp_prog(xsk);
 		if (err)
-- 
2.19.1


* [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (3 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 04/17] libbpf: Support " Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 20:10   ` Jakub Kicinski
  2019-06-12 15:56 ` [PATCH bpf-next v4 06/17] xsk: Return the whole xdp_desc from xsk_umem_consume_tx Maxim Mikityanskiy
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

The typical XDP memory scheme is one packet per page. Change the
default AF_XDP frame size in libbpf to 4096, which is the page size on
x86, so that libbpf can be used with drivers that follow the
packet-per-page scheme.

Add a command line option -f to xdpsock to allow specifying a custom
frame size, e.g. ./xdpsock -i eth0 -t -z -f 2048.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 samples/bpf/xdpsock_user.c | 44 ++++++++++++++++++++++++--------------
 tools/lib/bpf/xsk.h        |  2 +-
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index d08ee1ab7bb4..86d173a332c1 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -68,6 +68,7 @@ static int opt_queue;
 static int opt_poll;
 static int opt_interval = 1;
 static u32 opt_xdp_bind_flags;
+static int opt_xsk_frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
 static __u32 prog_id;
 
 struct xsk_umem_info {
@@ -276,6 +277,12 @@ static size_t gen_eth_frame(struct xsk_umem_info *umem, u64 addr)
 static struct xsk_umem_info *xsk_configure_umem(void *buffer, u64 size)
 {
 	struct xsk_umem_info *umem;
+	struct xsk_umem_config cfg = {
+		.fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
+		.comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
+		.frame_size = opt_xsk_frame_size,
+		.frame_headroom = XSK_UMEM__DEFAULT_FRAME_HEADROOM,
+	};
 	int ret;
 
 	umem = calloc(1, sizeof(*umem));
@@ -283,7 +290,7 @@ static struct xsk_umem_info *xsk_configure_umem(void *buffer, u64 size)
 		exit_with_error(errno);
 
 	ret = xsk_umem__create(&umem->umem, buffer, size, &umem->fq, &umem->cq,
-			       NULL);
+			       &cfg);
 	if (ret)
 		exit_with_error(-ret);
 
@@ -323,11 +330,9 @@ static struct xsk_socket_info *xsk_configure_socket(struct xsk_umem_info *umem)
 				     &idx);
 	if (ret != XSK_RING_PROD__DEFAULT_NUM_DESCS)
 		exit_with_error(-ret);
-	for (i = 0;
-	     i < XSK_RING_PROD__DEFAULT_NUM_DESCS *
-		     XSK_UMEM__DEFAULT_FRAME_SIZE;
-	     i += XSK_UMEM__DEFAULT_FRAME_SIZE)
-		*xsk_ring_prod__fill_addr(&xsk->umem->fq, idx++) = i;
+	for (i = 0; i < XSK_RING_PROD__DEFAULT_NUM_DESCS; i++)
+		*xsk_ring_prod__fill_addr(&xsk->umem->fq, idx++) =
+			i * opt_xsk_frame_size;
 	xsk_ring_prod__submit(&xsk->umem->fq,
 			      XSK_RING_PROD__DEFAULT_NUM_DESCS);
 
@@ -346,6 +351,7 @@ static struct option long_options[] = {
 	{"interval", required_argument, 0, 'n'},
 	{"zero-copy", no_argument, 0, 'z'},
 	{"copy", no_argument, 0, 'c'},
+	{"frame-size", required_argument, 0, 'f'},
 	{0, 0, 0, 0}
 };
 
@@ -365,8 +371,9 @@ static void usage(const char *prog)
 		"  -n, --interval=n	Specify statistics update interval (default 1 sec).\n"
 		"  -z, --zero-copy      Force zero-copy mode.\n"
 		"  -c, --copy           Force copy mode.\n"
+		"  -f, --frame-size=n   Set the frame size (must be a power of two, default is %d).\n"
 		"\n";
-	fprintf(stderr, str, prog);
+	fprintf(stderr, str, prog, XSK_UMEM__DEFAULT_FRAME_SIZE);
 	exit(EXIT_FAILURE);
 }
 
@@ -377,7 +384,7 @@ static void parse_command_line(int argc, char **argv)
 	opterr = 0;
 
 	for (;;) {
-		c = getopt_long(argc, argv, "Frtli:q:psSNn:cz", long_options,
+		c = getopt_long(argc, argv, "Frtli:q:psSNn:czf:", long_options,
 				&option_index);
 		if (c == -1)
 			break;
@@ -420,6 +427,9 @@ static void parse_command_line(int argc, char **argv)
 		case 'F':
 			opt_xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST;
 			break;
+		case 'f':
+			opt_xsk_frame_size = atoi(optarg);
+			break;
 		default:
 			usage(basename(argv[0]));
 		}
@@ -432,6 +442,11 @@ static void parse_command_line(int argc, char **argv)
 		usage(basename(argv[0]));
 	}
 
+	if (opt_xsk_frame_size & (opt_xsk_frame_size - 1)) {
+		fprintf(stderr, "--frame-size=%d is not a power of two\n",
+			opt_xsk_frame_size);
+		usage(basename(argv[0]));
+	}
 }
 
 static void kick_tx(struct xsk_socket_info *xsk)
@@ -583,8 +598,7 @@ static void tx_only(struct xsk_socket_info *xsk)
 
 			for (i = 0; i < BATCH_SIZE; i++) {
 				xsk_ring_prod__tx_desc(&xsk->tx, idx + i)->addr
-					= (frame_nb + i) <<
-					XSK_UMEM__DEFAULT_FRAME_SHIFT;
+					= (frame_nb + i) * opt_xsk_frame_size;
 				xsk_ring_prod__tx_desc(&xsk->tx, idx + i)->len =
 					sizeof(pkt_data) - 1;
 			}
@@ -661,21 +675,19 @@ int main(int argc, char **argv)
 	}
 
 	ret = posix_memalign(&bufs, getpagesize(), /* PAGE_SIZE aligned */
-			     NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE);
+			     NUM_FRAMES * opt_xsk_frame_size);
 	if (ret)
 		exit_with_error(ret);
 
        /* Create sockets... */
-	umem = xsk_configure_umem(bufs,
-				  NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE);
+	umem = xsk_configure_umem(bufs, NUM_FRAMES * opt_xsk_frame_size);
 	xsks[num_socks++] = xsk_configure_socket(umem);
 
 	if (opt_bench == BENCH_TXONLY) {
 		int i;
 
-		for (i = 0; i < NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;
-		     i += XSK_UMEM__DEFAULT_FRAME_SIZE)
-			(void)gen_eth_frame(umem, i);
+		for (i = 0; i < NUM_FRAMES; i++)
+			(void)gen_eth_frame(umem, i * opt_xsk_frame_size);
 	}
 
 	signal(SIGINT, int_exit);
diff --git a/tools/lib/bpf/xsk.h b/tools/lib/bpf/xsk.h
index 82ea71a0f3ec..833a6e60d065 100644
--- a/tools/lib/bpf/xsk.h
+++ b/tools/lib/bpf/xsk.h
@@ -167,7 +167,7 @@ LIBBPF_API int xsk_socket__fd(const struct xsk_socket *xsk);
 
 #define XSK_RING_CONS__DEFAULT_NUM_DESCS      2048
 #define XSK_RING_PROD__DEFAULT_NUM_DESCS      2048
-#define XSK_UMEM__DEFAULT_FRAME_SHIFT    11 /* 2048 bytes */
+#define XSK_UMEM__DEFAULT_FRAME_SHIFT    12 /* 4096 bytes */
 #define XSK_UMEM__DEFAULT_FRAME_SIZE     (1 << XSK_UMEM__DEFAULT_FRAME_SHIFT)
 #define XSK_UMEM__DEFAULT_FRAME_HEADROOM 0
 
-- 
2.19.1


* [PATCH bpf-next v4 06/17] xsk: Return the whole xdp_desc from xsk_umem_consume_tx
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (4 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-13 12:48   ` Björn Töpel
  2019-06-12 15:56 ` [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels Maxim Mikityanskiy
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Some drivers want to access the data being transmitted in order to
implement NIC acceleration features. This is also useful in the AF_XDP
TX flow.

Change the xsk_umem_consume_tx API to return the whole xdp_desc, from
which the data pointer, length and DMA address can all be obtained,
instead of only the latter two. Adapt the i40e and ixgbe
implementations to this change.
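
A generic driver TX loop under the new API looks roughly as follows
(sketch only, compare the i40e/ixgbe hunks below; my_drv_post_tx_desc()
is a made-up stand-in for posting a hardware descriptor):

    static void my_drv_xmit_zc(struct xdp_umem *umem, unsigned int budget)
    {
            struct xdp_desc desc;

            while (budget-- > 0) {
                    dma_addr_t dma;
                    void *data;

                    if (!xsk_umem_consume_tx(umem, &desc))
                            break;

                    /* Both addresses are derived from desc.addr now. */
                    dma  = xdp_umem_get_dma(umem, desc.addr);
                    data = xdp_umem_get_data(umem, desc.addr);

                    my_drv_post_tx_desc(dma, data, desc.len);
            }

            xsk_umem_consume_tx_done(umem);
    }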

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Cc: Björn Töpel <bjorn.topel@intel.com>
Cc: Magnus Karlsson <magnus.karlsson@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c   | 12 +++++++-----
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 15 +++++++++------
 include/net/xdp_sock.h                       |  6 +++---
 net/xdp/xsk.c                                | 10 +++-------
 4 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 1b17486543ac..eae6fafad1b8 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -640,8 +640,8 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
 	struct i40e_tx_desc *tx_desc = NULL;
 	struct i40e_tx_buffer *tx_bi;
 	bool work_done = true;
+	struct xdp_desc desc;
 	dma_addr_t dma;
-	u32 len;
 
 	while (budget-- > 0) {
 		if (!unlikely(I40E_DESC_UNUSED(xdp_ring))) {
@@ -650,21 +650,23 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
 			break;
 		}
 
-		if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
+		if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
 			break;
 
-		dma_sync_single_for_device(xdp_ring->dev, dma, len,
+		dma = xdp_umem_get_dma(xdp_ring->xsk_umem, desc.addr);
+
+		dma_sync_single_for_device(xdp_ring->dev, dma, desc.len,
 					   DMA_BIDIRECTIONAL);
 
 		tx_bi = &xdp_ring->tx_bi[xdp_ring->next_to_use];
-		tx_bi->bytecount = len;
+		tx_bi->bytecount = desc.len;
 
 		tx_desc = I40E_TX_DESC(xdp_ring, xdp_ring->next_to_use);
 		tx_desc->buffer_addr = cpu_to_le64(dma);
 		tx_desc->cmd_type_offset_bsz =
 			build_ctob(I40E_TX_DESC_CMD_ICRC
 				   | I40E_TX_DESC_CMD_EOP,
-				   0, len, 0);
+				   0, desc.len, 0);
 
 		xdp_ring->next_to_use++;
 		if (xdp_ring->next_to_use == xdp_ring->count)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index bfe95ce0bd7f..0297a70a4e2d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -621,8 +621,9 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 	union ixgbe_adv_tx_desc *tx_desc = NULL;
 	struct ixgbe_tx_buffer *tx_bi;
 	bool work_done = true;
-	u32 len, cmd_type;
+	struct xdp_desc desc;
 	dma_addr_t dma;
+	u32 cmd_type;
 
 	while (budget-- > 0) {
 		if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
@@ -631,14 +632,16 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 			break;
 		}
 
-		if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
+		if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
 			break;
 
-		dma_sync_single_for_device(xdp_ring->dev, dma, len,
+		dma = xdp_umem_get_dma(xdp_ring->xsk_umem, desc.addr);
+
+		dma_sync_single_for_device(xdp_ring->dev, dma, desc.len,
 					   DMA_BIDIRECTIONAL);
 
 		tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
-		tx_bi->bytecount = len;
+		tx_bi->bytecount = desc.len;
 		tx_bi->xdpf = NULL;
 
 		tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
@@ -648,10 +651,10 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 		cmd_type = IXGBE_ADVTXD_DTYP_DATA |
 			   IXGBE_ADVTXD_DCMD_DEXT |
 			   IXGBE_ADVTXD_DCMD_IFCS;
-		cmd_type |= len | IXGBE_TXD_CMD;
+		cmd_type |= desc.len | IXGBE_TXD_CMD;
 		tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
 		tx_desc->read.olinfo_status =
-			cpu_to_le32(len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+			cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
 
 		xdp_ring->next_to_use++;
 		if (xdp_ring->next_to_use == xdp_ring->count)
diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index b6f5ebae43a1..057b159ff8b9 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -81,7 +81,7 @@ bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt);
 u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr);
 void xsk_umem_discard_addr(struct xdp_umem *umem);
 void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries);
-bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len);
+bool xsk_umem_consume_tx(struct xdp_umem *umem, struct xdp_desc *desc);
 void xsk_umem_consume_tx_done(struct xdp_umem *umem);
 struct xdp_umem_fq_reuse *xsk_reuseq_prepare(u32 nentries);
 struct xdp_umem_fq_reuse *xsk_reuseq_swap(struct xdp_umem *umem,
@@ -175,8 +175,8 @@ static inline void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries)
 {
 }
 
-static inline bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma,
-				       u32 *len)
+static inline bool xsk_umem_consume_tx(struct xdp_umem *umem,
+				       struct xdp_desc *desc)
 {
 	return false;
 }
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 35ca531ac74e..74417a851ed5 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -172,22 +172,18 @@ void xsk_umem_consume_tx_done(struct xdp_umem *umem)
 }
 EXPORT_SYMBOL(xsk_umem_consume_tx_done);
 
-bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len)
+bool xsk_umem_consume_tx(struct xdp_umem *umem, struct xdp_desc *desc)
 {
-	struct xdp_desc desc;
 	struct xdp_sock *xs;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(xs, &umem->xsk_list, list) {
-		if (!xskq_peek_desc(xs->tx, &desc))
+		if (!xskq_peek_desc(xs->tx, desc))
 			continue;
 
-		if (xskq_produce_addr_lazy(umem->cq, desc.addr))
+		if (xskq_produce_addr_lazy(umem->cq, desc->addr))
 			goto out;
 
-		*dma = xdp_umem_get_dma(umem, desc.addr);
-		*len = desc.len;
-
 		xskq_discard_desc(xs->tx);
 		rcu_read_unlock();
 		return true;
-- 
2.19.1


* [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (5 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 06/17] xsk: Return the whole xdp_desc from xsk_umem_consume_tx Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 20:23   ` Jakub Kicinski
  2019-06-12 15:56 ` [PATCH bpf-next v4 08/17] net/mlx5e: Replace deprecated PCI_DMA_TODEVICE Maxim Mikityanskiy
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Currently, libbpf uses the number of combined channels as the maximum
queue number. However, the kernel has a different limitation:

- xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).

- ethtool_set_channels() checks for UMEMs in queues up to
  combined_count + max(rx_count, tx_count).

libbpf shouldn't limit applications to a lower maximum queue number
than the kernel allows. Account for non-combined RX and TX channels when
calculating the maximum queue number, using the same formula as ethtool:
combined_count + max(rx_count, tx_count). For example, a NIC exposing 16
combined and 4 RX-only channels yields 16 + max(4, 0) = 20 queues.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 tools/lib/bpf/xsk.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index bf15a80a37c2..86107857e1f0 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
 		goto out;
 	}
 
-	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
+	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
+
+	if (ret == 0 || errno == EOPNOTSUPP)
 		/* If the device says it has no channels, then all traffic
 		 * is sent to a single stream, so max queues = 1.
 		 */
 		ret = 1;
-	else
-		ret = channels.max_combined;
 
 out:
 	close(fd);
-- 
2.19.1


* [PATCH bpf-next v4 08/17] net/mlx5e: Replace deprecated PCI_DMA_TODEVICE
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (6 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 09/17] net/mlx5e: Calculate linear RX frag size considering XSK Maxim Mikityanskiy
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

The PCI API for DMA is deprecated, and PCI_DMA_TODEVICE is just defined
to DMA_TO_DEVICE for backward compatibility. Just use DMA_TO_DEVICE.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index eb8ef78e5626..5a900b70b203 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -64,7 +64,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di,
 		return false;
 	xdpi.dma_addr = di->addr + (xdpi.xdpf->data - (void *)xdpi.xdpf);
 	dma_sync_single_for_device(sq->pdev, xdpi.dma_addr,
-				   xdpi.xdpf->len, PCI_DMA_TODEVICE);
+				   xdpi.xdpf->len, DMA_TO_DEVICE);
 	xdpi.di = *di;
 
 	return sq->xmit_xdp_frame(sq, &xdpi);
-- 
2.19.1


* [PATCH bpf-next v4 09/17] net/mlx5e: Calculate linear RX frag size considering XSK
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (7 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 08/17] net/mlx5e: Replace deprecated PCI_DMA_TODEVICE Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 10/17] net/mlx5e: Allow ICO SQ to be used by multiple RQs Maxim Mikityanskiy
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Additional conditions introduced:

- XSK implies XDP.
- Headroom includes the XSK headroom if it exists.
- No space is reserved for struct skb_shared_info in XSK mode.
- Fragment size smaller than the XSK chunk size is not allowed.

A new auxiliary function, mlx5e_get_linear_rq_headroom, with XSK
support is introduced. Use this function in the implementation of
mlx5e_get_rq_headroom. Change the headroom to u32 to match the headroom
field in struct xdp_umem.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../ethernet/mellanox/mlx5/core/en/params.c   | 65 +++++++++++++------
 .../ethernet/mellanox/mlx5/core/en/params.h   |  8 ++-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  2 +-
 3 files changed, 52 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index d3744bffbae3..50a458dc3836 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -3,33 +3,62 @@
 
 #include "en/params.h"
 
-u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params)
+static inline bool mlx5e_rx_is_xdp(struct mlx5e_params *params,
+				   struct mlx5e_xsk_param *xsk)
 {
-	u16 hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
-	u16 linear_rq_headroom = params->xdp_prog ?
-		XDP_PACKET_HEADROOM : MLX5_RX_HEADROOM;
-	u32 frag_sz;
+	return params->xdp_prog || xsk;
+}
+
+static inline u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
+					       struct mlx5e_xsk_param *xsk)
+{
+	u16 headroom = NET_IP_ALIGN;
+
+	if (mlx5e_rx_is_xdp(params, xsk)) {
+		headroom += XDP_PACKET_HEADROOM;
+		if (xsk)
+			headroom += xsk->headroom;
+	} else {
+		headroom += MLX5_RX_HEADROOM;
+	}
+
+	return headroom;
+}
+
+u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
+				struct mlx5e_xsk_param *xsk)
+{
+	u32 hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
+	u16 linear_rq_headroom = mlx5e_get_linear_rq_headroom(params, xsk);
+	u32 frag_sz = linear_rq_headroom + hw_mtu;
 
-	linear_rq_headroom += NET_IP_ALIGN;
+	/* AF_XDP doesn't build SKBs in place. */
+	if (!xsk)
+		frag_sz = MLX5_SKB_FRAG_SZ(frag_sz);
 
-	frag_sz = MLX5_SKB_FRAG_SZ(linear_rq_headroom + hw_mtu);
+	/* XDP in mlx5e doesn't support multiple packets per page. */
+	if (mlx5e_rx_is_xdp(params, xsk))
+		frag_sz = max_t(u32, frag_sz, PAGE_SIZE);
 
-	if (params->xdp_prog && frag_sz < PAGE_SIZE)
-		frag_sz = PAGE_SIZE;
+	/* Even if we can go with a smaller fragment size, we must not put
+	 * multiple packets into a single frame.
+	 */
+	if (xsk)
+		frag_sz = max_t(u32, frag_sz, xsk->chunk_size);
 
 	return frag_sz;
 }
 
 u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params)
 {
-	u32 linear_frag_sz = mlx5e_rx_get_linear_frag_sz(params);
+	u32 linear_frag_sz = mlx5e_rx_get_linear_frag_sz(params, NULL);
 
 	return MLX5_MPWRQ_LOG_WQE_SZ - order_base_2(linear_frag_sz);
 }
 
 bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params)
 {
-	u32 frag_sz = mlx5e_rx_get_linear_frag_sz(params);
+	u32 frag_sz = mlx5e_rx_get_linear_frag_sz(params, NULL);
 
 	return !params->lro_en && frag_sz <= PAGE_SIZE;
 }
@@ -39,7 +68,7 @@ bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params)
 bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
 				  struct mlx5e_params *params)
 {
-	u32 frag_sz = mlx5e_rx_get_linear_frag_sz(params);
+	u32 frag_sz = mlx5e_rx_get_linear_frag_sz(params, NULL);
 	s8 signed_log_num_strides_param;
 	u8 log_num_strides;
 
@@ -75,7 +104,7 @@ u8 mlx5e_mpwqe_get_log_stride_size(struct mlx5_core_dev *mdev,
 				   struct mlx5e_params *params)
 {
 	if (mlx5e_rx_mpwqe_is_linear_skb(mdev, params))
-		return order_base_2(mlx5e_rx_get_linear_frag_sz(params));
+		return order_base_2(mlx5e_rx_get_linear_frag_sz(params, NULL));
 
 	return MLX5_MPWRQ_DEF_LOG_STRIDE_SZ(mdev);
 }
@@ -90,15 +119,9 @@ u8 mlx5e_mpwqe_get_log_num_strides(struct mlx5_core_dev *mdev,
 u16 mlx5e_get_rq_headroom(struct mlx5_core_dev *mdev,
 			  struct mlx5e_params *params)
 {
-	u16 linear_rq_headroom = params->xdp_prog ?
-		XDP_PACKET_HEADROOM : MLX5_RX_HEADROOM;
-	bool is_linear_skb;
-
-	linear_rq_headroom += NET_IP_ALIGN;
-
-	is_linear_skb = (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC) ?
+	bool is_linear_skb = (params->rq_wq_type == MLX5_WQ_TYPE_CYCLIC) ?
 		mlx5e_rx_is_linear_skb(params) :
 		mlx5e_rx_mpwqe_is_linear_skb(mdev, params);
 
-	return is_linear_skb ? linear_rq_headroom : 0;
+	return is_linear_skb ? mlx5e_get_linear_rq_headroom(params, NULL) : 0;
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index b106a0236f36..ed420f3efe52 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -6,7 +6,13 @@
 
 #include "en.h"
 
-u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params);
+struct mlx5e_xsk_param {
+	u16 headroom;
+	u16 chunk_size;
+};
+
+u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
+				struct mlx5e_xsk_param *xsk);
 u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params);
 bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params);
 bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 3e54b1f33587..35d9f5f9f7cf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1955,7 +1955,7 @@ static void mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
 	if (mlx5e_rx_is_linear_skb(params)) {
 		int frag_stride;
 
-		frag_stride = mlx5e_rx_get_linear_frag_sz(params);
+		frag_stride = mlx5e_rx_get_linear_frag_sz(params, NULL);
 		frag_stride = roundup_pow_of_two(frag_stride);
 
 		info->arr[0].frag_size = byte_count;
-- 
2.19.1


* [PATCH bpf-next v4 10/17] net/mlx5e: Allow ICO SQ to be used by multiple RQs
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (8 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 09/17] net/mlx5e: Calculate linear RX frag size considering XSK Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 11/17] net/mlx5e: Refactor struct mlx5e_xdp_info Maxim Mikityanskiy
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Prepare for the creation of the XSK RQ, which will require posting
UMRs, too. The same ICO SQ will be used for both RQs and also to trigger
interrupts by posting NOPs. As a consequence, UMR WQEs can no longer be
reused, so the optimization introduced in commit ab966d7e4ff98
("net/mlx5e: RX, Recycle buffer of UMR WQEs") is reverted.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  9 +++++++
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 27 +++++++------------
 .../net/ethernet/mellanox/mlx5/core/en_txrx.c |  4 ++-
 drivers/net/ethernet/mellanox/mlx5/core/wq.h  |  5 ----
 4 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 3a183d690e23..41e22763007c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -348,6 +348,13 @@ enum {
 
 struct mlx5e_sq_wqe_info {
 	u8  opcode;
+
+	/* Auxiliary data for different opcodes. */
+	union {
+		struct {
+			struct mlx5e_rq *rq;
+		} umr;
+	};
 };
 
 struct mlx5e_txqsq {
@@ -570,6 +577,7 @@ struct mlx5e_rq {
 			u8                     log_stride_sz;
 			u8                     umr_in_progress;
 			u8                     umr_last_bulk;
+			u8                     umr_completed;
 		} mpwqe;
 	};
 	struct {
@@ -797,6 +805,7 @@ void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
 void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
+void mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
 bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq);
 void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix);
 void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 13133e7f088e..5d762da6bf9b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -425,11 +425,6 @@ static void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq, u8 n)
 	mlx5_wq_ll_update_db_record(wq);
 }
 
-static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
-{
-	return mlx5_wq_cyc_get_ctr_wrap_cnt(&sq->wq, sq->pc);
-}
-
 static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
 					      struct mlx5_wq_cyc *wq,
 					      u16 pi, u16 nnops)
@@ -465,9 +460,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	}
 
 	umr_wqe = mlx5_wq_cyc_get_wqe(wq, pi);
-	if (unlikely(mlx5e_icosq_wrap_cnt(sq) < 2))
-		memcpy(umr_wqe, &rq->mpwqe.umr_wqe,
-		       offsetof(struct mlx5e_umr_wqe, inline_mtts));
+	memcpy(umr_wqe, &rq->mpwqe.umr_wqe, offsetof(struct mlx5e_umr_wqe, inline_mtts));
 
 	for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++, dma_info++) {
 		err = mlx5e_page_alloc_mapped(rq, dma_info);
@@ -485,6 +478,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	umr_wqe->uctrl.xlt_offset = cpu_to_be16(xlt_offset);
 
 	sq->db.ico_wqe[pi].opcode = MLX5_OPCODE_UMR;
+	sq->db.ico_wqe[pi].umr.rq = rq;
 	sq->pc += MLX5E_UMR_WQEBBS;
 
 	sq->doorbell_cseg = &umr_wqe->ctrl;
@@ -542,11 +536,10 @@ bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
 	return !!err;
 }
 
-static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
+void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
 {
 	struct mlx5e_icosq *sq = container_of(cq, struct mlx5e_icosq, cq);
 	struct mlx5_cqe64 *cqe;
-	u8  completed_umr = 0;
 	u16 sqcc;
 	int i;
 
@@ -587,7 +580,7 @@ static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
 
 			if (likely(wi->opcode == MLX5_OPCODE_UMR)) {
 				sqcc += MLX5E_UMR_WQEBBS;
-				completed_umr++;
+				wi->umr.rq->mpwqe.umr_completed++;
 			} else if (likely(wi->opcode == MLX5_OPCODE_NOP)) {
 				sqcc++;
 			} else {
@@ -603,24 +596,24 @@ static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
 	sq->cc = sqcc;
 
 	mlx5_cqwq_update_db_record(&cq->wq);
-
-	if (likely(completed_umr)) {
-		mlx5e_post_rx_mpwqe(rq, completed_umr);
-		rq->mpwqe.umr_in_progress -= completed_umr;
-	}
 }
 
 bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
 {
 	struct mlx5e_icosq *sq = &rq->channel->icosq;
 	struct mlx5_wq_ll *wq = &rq->mpwqe.wq;
+	u8  umr_completed = rq->mpwqe.umr_completed;
 	u8  missing, i;
 	u16 head;
 
 	if (unlikely(!test_bit(MLX5E_RQ_STATE_ENABLED, &rq->state)))
 		return false;
 
-	mlx5e_poll_ico_cq(&sq->cq, rq);
+	if (umr_completed) {
+		mlx5e_post_rx_mpwqe(rq, umr_completed);
+		rq->mpwqe.umr_in_progress -= umr_completed;
+		rq->mpwqe.umr_completed = 0;
+	}
 
 	missing = mlx5_wq_ll_missing(wq) - rq->mpwqe.umr_in_progress;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
index f9862bf75491..de4d5ae431af 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
@@ -107,7 +107,9 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
 		busy |= work_done == budget;
 	}
 
-	busy |= c->rq.post_wqes(rq);
+	mlx5e_poll_ico_cq(&c->icosq.cq);
+
+	busy |= rq->post_wqes(rq);
 
 	if (busy) {
 		if (likely(mlx5e_channel_no_affinity_change(c)))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
index 1f87cce421e0..f1ec58c9e9e3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
@@ -134,11 +134,6 @@ static inline void mlx5_wq_cyc_update_db_record(struct mlx5_wq_cyc *wq)
 	*wq->db = cpu_to_be32(wq->wqe_ctr);
 }
 
-static inline u16 mlx5_wq_cyc_get_ctr_wrap_cnt(struct mlx5_wq_cyc *wq, u16 ctr)
-{
-	return ctr >> wq->fbc.log_sz;
-}
-
 static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
 {
 	return ctr & wq->fbc.sz_m1;
-- 
2.19.1


* [PATCH bpf-next v4 11/17] net/mlx5e: Refactor struct mlx5e_xdp_info
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (9 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 10/17] net/mlx5e: Allow ICO SQ to be used by multiple RQs Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 15:56 ` [PATCH bpf-next v4 12/17] net/mlx5e: Share the XDP SQ for XDP_TX between RQs Maxim Mikityanskiy
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Currently, struct mlx5e_xdp_info has some issues that have to be cleaned
up before the upcoming AF_XDP support makes things too complicated and
messy. This structure is used both when sending the packet and on
completion. Moreover, the cleanup procedure on completion depends on the
origin of the packet (XDP_REDIRECT, XDP_TX). Adding AF_XDP support will
add new flows that use this structure in yet other ways. To avoid
overcomplicating the code, this commit refactors the usage of this
structure in the following ways:

1. struct mlx5e_xdp_info is split into two different structures. One is
struct mlx5e_xdp_xmit_data, a transient structure that doesn't need to
be stored and is only used while sending the packet. The other is still
struct mlx5e_xdp_info that is stored in a FIFO and contains the fields
needed on completion.

2. The fields of struct mlx5e_xdp_info that are used in different flows
are put into a union. A special enum indicates the cleanup mode and
helps choose the right union member. This approach is clear and
explicit. Although it would be possible to "guess" the mode from the
field values and the XDP SQ type, that approach would be less clear and
extensible and would require following the whole call chain to
understand what is going on.

For reference, these are the fields of struct mlx5e_xdp_info that are
used in the different flows (including the AF_XDP ones):

Packet origin          | Fields used on completion | Cleanup steps
-----------------------+---------------------------+------------------
XDP_REDIRECT,          | xdpf, dma_addr            | DMA unmap and
XDP_TX from XSK RQ     |                           | xdp_return_frame.
-----------------------+---------------------------+------------------
XDP_TX from regular RQ | di                        | Recycle page.
-----------------------+---------------------------+------------------
AF_XDP TX              | (none)                    | Increment the
                       |                           | producer index in
                       |                           | Completion Ring.

On send, the same set of mlx5e_xdp_xmit_data fields is used in all
flows: the DMA address, the virtual address and the length.
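
The resulting completion-side dispatch could look as follows (a sketch;
the helper hosting this switch is an assumption, while the cleanup
steps per mode follow the table above):

    static void mlx5e_xdpi_cleanup_sketch(struct mlx5e_xdpsq *sq,
                                          struct mlx5e_rq *rq,
                                          struct mlx5e_xdp_info *xdpi,
                                          bool recycle, u32 *xsk_frames)
    {
            switch (xdpi->mode) {
            case MLX5E_XDP_XMIT_MODE_FRAME:
                    /* XDP_REDIRECT, or XDP_TX from an XSK RQ */
                    dma_unmap_single(sq->pdev, xdpi->frame.dma_addr,
                                     xdpi->frame.xdpf->len, DMA_TO_DEVICE);
                    xdp_return_frame(xdpi->frame.xdpf);
                    break;
            case MLX5E_XDP_XMIT_MODE_PAGE:
                    /* XDP_TX from a regular RQ: recycle our own page */
                    mlx5e_page_release(rq, &xdpi->page.di, recycle);
                    break;
            case MLX5E_XDP_XMIT_MODE_XSK:
                    /* AF_XDP TX: count it; the caller advances the
                     * Completion Ring producer once via
                     * xsk_umem_complete_tx(umem, *xsk_frames).
                     */
                    (*xsk_frames)++;
                    break;
            }
    }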

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  | 46 +++++++++--
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 81 ++++++++++++-------
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  | 11 ++-
 3 files changed, 97 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 41e22763007c..cdb73568a344 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -402,10 +402,44 @@ struct mlx5e_dma_info {
 	dma_addr_t      addr;
 };
 
+/* XDP packets can be transmitted in different ways. On completion, we need to
+ * distinguish between them to clean up things in a proper way.
+ */
+enum mlx5e_xdp_xmit_mode {
+	/* An xdp_frame was transmitted due to either XDP_REDIRECT from another
+	 * device or XDP_TX from an XSK RQ. The frame has to be unmapped and
+	 * returned.
+	 */
+	MLX5E_XDP_XMIT_MODE_FRAME,
+
+	/* The xdp_frame was created in place as a result of XDP_TX from a
+	 * regular RQ. No DMA remapping happened, and the page belongs to us.
+	 */
+	MLX5E_XDP_XMIT_MODE_PAGE,
+
+	/* No xdp_frame was created at all, the transmit happened from a UMEM
+	 * page. The UMEM Completion Ring producer pointer has to be increased.
+	 */
+	MLX5E_XDP_XMIT_MODE_XSK,
+};
+
 struct mlx5e_xdp_info {
-	struct xdp_frame      *xdpf;
-	dma_addr_t            dma_addr;
-	struct mlx5e_dma_info di;
+	enum mlx5e_xdp_xmit_mode mode;
+	union {
+		struct {
+			struct xdp_frame *xdpf;
+			dma_addr_t dma_addr;
+		} frame;
+		struct {
+			struct mlx5e_dma_info di;
+		} page;
+	};
+};
+
+struct mlx5e_xdp_xmit_data {
+	dma_addr_t  dma_addr;
+	void       *data;
+	u32         len;
 };
 
 struct mlx5e_xdp_info_fifo {
@@ -431,8 +465,10 @@ struct mlx5e_xdp_mpwqe {
 };
 
 struct mlx5e_xdpsq;
-typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq*,
-					struct mlx5e_xdp_info*);
+typedef bool (*mlx5e_fp_xmit_xdp_frame)(struct mlx5e_xdpsq *,
+					struct mlx5e_xdp_xmit_data *,
+					struct mlx5e_xdp_info *);
+
 struct mlx5e_xdpsq {
 	/* data path */
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 5a900b70b203..89f6eb1109cf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -57,17 +57,27 @@ static inline bool
 mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di,
 		    struct xdp_buff *xdp)
 {
+	struct mlx5e_xdp_xmit_data xdptxd;
 	struct mlx5e_xdp_info xdpi;
+	struct xdp_frame *xdpf;
+	dma_addr_t dma_addr;
 
-	xdpi.xdpf = convert_to_xdp_frame(xdp);
-	if (unlikely(!xdpi.xdpf))
+	xdpf = convert_to_xdp_frame(xdp);
+	if (unlikely(!xdpf))
 		return false;
-	xdpi.dma_addr = di->addr + (xdpi.xdpf->data - (void *)xdpi.xdpf);
-	dma_sync_single_for_device(sq->pdev, xdpi.dma_addr,
-				   xdpi.xdpf->len, DMA_TO_DEVICE);
-	xdpi.di = *di;
 
-	return sq->xmit_xdp_frame(sq, &xdpi);
+	xdptxd.data = xdpf->data;
+	xdptxd.len  = xdpf->len;
+
+	xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
+
+	dma_addr = di->addr + (xdpf->data - (void *)xdpf);
+	dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE);
+
+	xdptxd.dma_addr = dma_addr;
+	xdpi.page.di = *di;
+
+	return sq->xmit_xdp_frame(sq, &xdptxd, &xdpi);
 }
 
 /* returns true if packet was consumed by xdp */
@@ -184,14 +194,13 @@ static void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq)
 }
 
 static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
+				       struct mlx5e_xdp_xmit_data *xdptxd,
 				       struct mlx5e_xdp_info *xdpi)
 {
 	struct mlx5e_xdp_mpwqe *session = &sq->mpwqe;
 	struct mlx5e_xdpsq_stats *stats = sq->stats;
 
-	struct xdp_frame *xdpf = xdpi->xdpf;
-
-	if (unlikely(sq->hw_mtu < xdpf->len)) {
+	if (unlikely(xdptxd->len > sq->hw_mtu)) {
 		stats->err++;
 		return false;
 	}
@@ -208,7 +217,7 @@ static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
 		mlx5e_xdp_mpwqe_session_start(sq);
 	}
 
-	mlx5e_xdp_mpwqe_add_dseg(sq, xdpi, stats);
+	mlx5e_xdp_mpwqe_add_dseg(sq, xdptxd, stats);
 
 	if (unlikely(session->complete ||
 		     session->ds_count == session->max_ds_count))
@@ -219,7 +228,9 @@ static bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
 	return true;
 }
 
-static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi)
+static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq,
+				 struct mlx5e_xdp_xmit_data *xdptxd,
+				 struct mlx5e_xdp_info *xdpi)
 {
 	struct mlx5_wq_cyc       *wq   = &sq->wq;
 	u16                       pi   = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
@@ -229,9 +240,8 @@ static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *
 	struct mlx5_wqe_eth_seg  *eseg = &wqe->eth;
 	struct mlx5_wqe_data_seg *dseg = wqe->data;
 
-	struct xdp_frame *xdpf = xdpi->xdpf;
-	dma_addr_t dma_addr  = xdpi->dma_addr;
-	unsigned int dma_len = xdpf->len;
+	dma_addr_t dma_addr = xdptxd->dma_addr;
+	u32 dma_len = xdptxd->len;
 
 	struct mlx5e_xdpsq_stats *stats = sq->stats;
 
@@ -253,7 +263,7 @@ static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *
 
 	/* copy the inline part if required */
 	if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
-		memcpy(eseg->inline_hdr.start, xdpf->data, MLX5E_XDP_MIN_INLINE);
+		memcpy(eseg->inline_hdr.start, xdptxd->data, MLX5E_XDP_MIN_INLINE);
 		eseg->inline_hdr.sz = cpu_to_be16(MLX5E_XDP_MIN_INLINE);
 		dma_len  -= MLX5E_XDP_MIN_INLINE;
 		dma_addr += MLX5E_XDP_MIN_INLINE;
@@ -286,14 +296,19 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 	for (i = 0; i < wi->num_pkts; i++) {
 		struct mlx5e_xdp_info xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
 
-		if (rq) {
-			/* XDP_TX */
-			mlx5e_page_release(rq, &xdpi.di, recycle);
-		} else {
+		switch (xdpi.mode) {
+		case MLX5E_XDP_XMIT_MODE_FRAME:
 			/* XDP_REDIRECT */
-			dma_unmap_single(sq->pdev, xdpi.dma_addr,
-					 xdpi.xdpf->len, DMA_TO_DEVICE);
-			xdp_return_frame(xdpi.xdpf);
+			dma_unmap_single(sq->pdev, xdpi.frame.dma_addr,
+					 xdpi.frame.xdpf->len, DMA_TO_DEVICE);
+			xdp_return_frame(xdpi.frame.xdpf);
+			break;
+		case MLX5E_XDP_XMIT_MODE_PAGE:
+			/* XDP_TX */
+			mlx5e_page_release(rq, &xdpi.page.di, recycle);
+			break;
+		default:
+			WARN_ON_ONCE(true);
 		}
 	}
 }
@@ -398,21 +413,27 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
+		struct mlx5e_xdp_xmit_data xdptxd;
 		struct mlx5e_xdp_info xdpi;
 
-		xdpi.dma_addr = dma_map_single(sq->pdev, xdpf->data, xdpf->len,
-					       DMA_TO_DEVICE);
-		if (unlikely(dma_mapping_error(sq->pdev, xdpi.dma_addr))) {
+		xdptxd.data = xdpf->data;
+		xdptxd.len = xdpf->len;
+		xdptxd.dma_addr = dma_map_single(sq->pdev, xdptxd.data,
+						 xdptxd.len, DMA_TO_DEVICE);
+
+		if (unlikely(dma_mapping_error(sq->pdev, xdptxd.dma_addr))) {
 			xdp_return_frame_rx_napi(xdpf);
 			drops++;
 			continue;
 		}
 
-		xdpi.xdpf = xdpf;
+		xdpi.mode           = MLX5E_XDP_XMIT_MODE_FRAME;
+		xdpi.frame.xdpf     = xdpf;
+		xdpi.frame.dma_addr = xdptxd.dma_addr;
 
-		if (unlikely(!sq->xmit_xdp_frame(sq, &xdpi))) {
-			dma_unmap_single(sq->pdev, xdpi.dma_addr,
-					 xdpf->len, DMA_TO_DEVICE);
+		if (unlikely(!sq->xmit_xdp_frame(sq, &xdptxd, &xdpi))) {
+			dma_unmap_single(sq->pdev, xdptxd.dma_addr,
+					 xdptxd.len, DMA_TO_DEVICE);
 			xdp_return_frame_rx_napi(xdpf);
 			drops++;
 		}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 8b537a4b0840..2a5158993349 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -97,15 +97,14 @@ static inline void mlx5e_xdp_update_inline_state(struct mlx5e_xdpsq *sq)
 }
 
 static inline void
-mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi,
+mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq,
+			 struct mlx5e_xdp_xmit_data *xdptxd,
 			 struct mlx5e_xdpsq_stats *stats)
 {
 	struct mlx5e_xdp_mpwqe *session = &sq->mpwqe;
-	dma_addr_t dma_addr    = xdpi->dma_addr;
-	struct xdp_frame *xdpf = xdpi->xdpf;
 	struct mlx5_wqe_data_seg *dseg =
 		(struct mlx5_wqe_data_seg *)session->wqe + session->ds_count;
-	u16 dma_len = xdpf->len;
+	u32 dma_len = xdptxd->len;
 
 	session->pkt_count++;
 
@@ -124,7 +123,7 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi,
 		}
 
 		inline_dseg->byte_count = cpu_to_be32(dma_len | MLX5_INLINE_SEG);
-		memcpy(inline_dseg->data, xdpf->data, dma_len);
+		memcpy(inline_dseg->data, xdptxd->data, dma_len);
 
 		session->ds_count += ds_cnt;
 		stats->inlnw++;
@@ -132,7 +131,7 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, struct mlx5e_xdp_info *xdpi,
 	}
 
 no_inline:
-	dseg->addr       = cpu_to_be64(dma_addr);
+	dseg->addr       = cpu_to_be64(xdptxd->dma_addr);
 	dseg->byte_count = cpu_to_be32(dma_len);
 	dseg->lkey       = sq->mkey_be;
 	session->ds_count++;
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH bpf-next v4 12/17] net/mlx5e: Share the XDP SQ for XDP_TX between RQs
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (10 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 11/17] net/mlx5e: Refactor struct mlx5e_xdp_info Maxim Mikityanskiy
@ 2019-06-12 15:56 ` Maxim Mikityanskiy
  2019-06-12 15:57 ` [PATCH bpf-next v4 13/17] net/mlx5e: XDP_TX from UMEM support Maxim Mikityanskiy
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:56 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Put the XDP SQ that is used for XDP_TX into the channel. It used to be a
part of the RQ, but with the introduction of AF_XDP there will be one
more RQ that could share the same XDP SQ. This patch is a preparation
for that change. As the SQ is no longer tied to a single RQ, each
in-flight descriptor now records the RQ it came from, so that the
completion handler can release the page back to the right RQ.

Separate XDP_TX statistics per RQ were implemented in one of the previous
patches.
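
The effect can be modeled with a standalone sketch (illustrative
userspace C, not driver code): the channel owns a single XDP SQ, every
RQ points at it, and each in-flight descriptor records its source RQ so
the completion handler knows where to recycle the page.

  #include <stdio.h>

  struct rq { const char *name; };
  struct xdpsq { int dummy; };

  struct channel {
          struct rq    rq;       /* regular RQ */
          struct rq    xskrq;    /* the future XSK RQ */
          struct xdpsq rq_xdpsq; /* shared by both RQs for XDP_TX */
  };

  /* Per-descriptor info for the page mode: the RQ pointer tells the
   * completion handler whose page pool to recycle into. */
  struct xdp_info_page { struct rq *rq; };

  static void complete_one(const struct xdp_info_page *xdpi)
  {
          printf("recycle page back to %s\n", xdpi->rq->name);
  }

  int main(void)
  {
          struct channel c = { { "regular RQ" }, { "XSK RQ" }, { 0 } };
          struct xdp_info_page a = { &c.rq }, b = { &c.xskrq };

          /* Both completions arrive on the same shared SQ: */
          complete_one(&a);
          complete_one(&b);
          return 0;
  }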

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  4 ++-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 20 +++++++-------
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  4 +--
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 26 +++++++++++--------
 .../net/ethernet/mellanox/mlx5/core/en_txrx.c |  4 +--
 5 files changed, 32 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index cdb73568a344..8cb28e5604f0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -431,6 +431,7 @@ struct mlx5e_xdp_info {
 			dma_addr_t dma_addr;
 		} frame;
 		struct {
+			struct mlx5e_rq *rq;
 			struct mlx5e_dma_info di;
 		} page;
 	};
@@ -643,7 +644,7 @@ struct mlx5e_rq {
 
 	/* XDP */
 	struct bpf_prog       *xdp_prog;
-	struct mlx5e_xdpsq     xdpsq;
+	struct mlx5e_xdpsq    *xdpsq;
 	DECLARE_BITMAP(flags, 8);
 	struct page_pool      *page_pool;
 
@@ -662,6 +663,7 @@ struct mlx5e_rq {
 struct mlx5e_channel {
 	/* data path */
 	struct mlx5e_rq            rq;
+	struct mlx5e_xdpsq         rq_xdpsq;
 	struct mlx5e_txqsq         sq[MLX5E_MAX_NUM_TC];
 	struct mlx5e_icosq         icosq;   /* internal control operations */
 	bool                       xdp;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 89f6eb1109cf..b3e118fc4521 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -54,8 +54,8 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params)
 }
 
 static inline bool
-mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di,
-		    struct xdp_buff *xdp)
+mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
+		    struct mlx5e_dma_info *di, struct xdp_buff *xdp)
 {
 	struct mlx5e_xdp_xmit_data xdptxd;
 	struct mlx5e_xdp_info xdpi;
@@ -75,6 +75,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di,
 	dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE);
 
 	xdptxd.dma_addr = dma_addr;
+	xdpi.page.rq = rq;
 	xdpi.page.di = *di;
 
 	return sq->xmit_xdp_frame(sq, &xdptxd, &xdpi);
@@ -105,7 +106,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
 		*len = xdp.data_end - xdp.data;
 		return false;
 	case XDP_TX:
-		if (unlikely(!mlx5e_xmit_xdp_buff(&rq->xdpsq, di, &xdp)))
+		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, di, &xdp)))
 			goto xdp_abort;
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
 		return true;
@@ -287,7 +288,6 @@ static bool mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq,
 
 static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 				  struct mlx5e_xdp_wqe_info *wi,
-				  struct mlx5e_rq *rq,
 				  bool recycle)
 {
 	struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo;
@@ -305,7 +305,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			break;
 		case MLX5E_XDP_XMIT_MODE_PAGE:
 			/* XDP_TX */
-			mlx5e_page_release(rq, &xdpi.page.di, recycle);
+			mlx5e_page_release(xdpi.page.rq, &xdpi.page.di, recycle);
 			break;
 		default:
 			WARN_ON_ONCE(true);
@@ -313,7 +313,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 	}
 }
 
-bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
+bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq)
 {
 	struct mlx5e_xdpsq *sq;
 	struct mlx5_cqe64 *cqe;
@@ -358,7 +358,7 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
 
 			sqcc += wi->num_wqebbs;
 
-			mlx5e_free_xdpsq_desc(sq, wi, rq, true);
+			mlx5e_free_xdpsq_desc(sq, wi, true);
 		} while (!last_wqe);
 	} while ((++i < MLX5E_TX_CQ_POLL_BUDGET) && (cqe = mlx5_cqwq_get_cqe(&cq->wq)));
 
@@ -373,7 +373,7 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
 	return (i == MLX5E_TX_CQ_POLL_BUDGET);
 }
 
-void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
+void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq)
 {
 	while (sq->cc != sq->pc) {
 		struct mlx5e_xdp_wqe_info *wi;
@@ -384,7 +384,7 @@ void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
 
 		sq->cc += wi->num_wqebbs;
 
-		mlx5e_free_xdpsq_desc(sq, wi, rq, false);
+		mlx5e_free_xdpsq_desc(sq, wi, false);
 	}
 }
 
@@ -450,7 +450,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 
 void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq)
 {
-	struct mlx5e_xdpsq *xdpsq = &rq->xdpsq;
+	struct mlx5e_xdpsq *xdpsq = rq->xdpsq;
 
 	if (xdpsq->mpwqe.wqe)
 		mlx5e_xdp_mpwqe_complete(xdpsq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 2a5158993349..86db5ad49a42 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -42,8 +42,8 @@
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params);
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
 		      void *va, u16 *rx_headroom, u32 *len);
-bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq);
-void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq);
+bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
+void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq);
 void mlx5e_set_xmit_fp(struct mlx5e_xdpsq *sq, bool is_mpw);
 void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq);
 int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 35d9f5f9f7cf..79f684cb8f51 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -418,6 +418,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
 	rq->mdev    = mdev;
 	rq->hw_mtu  = MLX5E_SW2HW_MTU(params, params->sw_mtu);
 	rq->stats   = &c->priv->channel_stats[c->ix].rq;
+	rq->xdpsq   = &c->rq_xdpsq;
 
 	rq->xdp_prog = params->xdp_prog ? bpf_prog_inc(params->xdp_prog) : NULL;
 	if (IS_ERR(rq->xdp_prog)) {
@@ -1439,7 +1440,7 @@ static int mlx5e_open_xdpsq(struct mlx5e_channel *c,
 	return err;
 }
 
-static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
+static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq)
 {
 	struct mlx5e_channel *c = sq->channel;
 
@@ -1447,7 +1448,7 @@ static void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
 	napi_synchronize(&c->napi);
 
 	mlx5e_destroy_sq(c->mdev, sq->sqn);
-	mlx5e_free_xdpsq_descs(sq, rq);
+	mlx5e_free_xdpsq_descs(sq);
 	mlx5e_free_xdpsq(sq);
 }
 
@@ -1826,7 +1827,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 
 	/* XDP SQ CQ params are same as normal TXQ sq CQ params */
 	err = c->xdp ? mlx5e_open_cq(c, params->tx_cq_moderation,
-				     &cparam->tx_cq, &c->rq.xdpsq.cq) : 0;
+				     &cparam->tx_cq, &c->rq_xdpsq.cq) : 0;
 	if (err)
 		goto err_close_rx_cq;
 
@@ -1840,9 +1841,12 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 	if (err)
 		goto err_close_icosq;
 
-	err = c->xdp ? mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, &c->rq.xdpsq, false) : 0;
-	if (err)
-		goto err_close_sqs;
+	if (c->xdp) {
+		err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq,
+				       &c->rq_xdpsq, false);
+		if (err)
+			goto err_close_sqs;
+	}
 
 	err = mlx5e_open_rq(c, params, &cparam->rq, &c->rq);
 	if (err)
@@ -1861,7 +1865,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 
 err_close_xdp_sq:
 	if (c->xdp)
-		mlx5e_close_xdpsq(&c->rq.xdpsq, &c->rq);
+		mlx5e_close_xdpsq(&c->rq_xdpsq);
 
 err_close_sqs:
 	mlx5e_close_sqs(c);
@@ -1872,7 +1876,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 err_disable_napi:
 	napi_disable(&c->napi);
 	if (c->xdp)
-		mlx5e_close_cq(&c->rq.xdpsq.cq);
+		mlx5e_close_cq(&c->rq_xdpsq.cq);
 
 err_close_rx_cq:
 	mlx5e_close_cq(&c->rq.cq);
@@ -1917,15 +1921,15 @@ static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
 
 static void mlx5e_close_channel(struct mlx5e_channel *c)
 {
-	mlx5e_close_xdpsq(&c->xdpsq, NULL);
+	mlx5e_close_xdpsq(&c->xdpsq);
 	mlx5e_close_rq(&c->rq);
 	if (c->xdp)
-		mlx5e_close_xdpsq(&c->rq.xdpsq, &c->rq);
+		mlx5e_close_xdpsq(&c->rq_xdpsq);
 	mlx5e_close_sqs(c);
 	mlx5e_close_icosq(&c->icosq);
 	napi_disable(&c->napi);
 	if (c->xdp)
-		mlx5e_close_cq(&c->rq.xdpsq.cq);
+		mlx5e_close_cq(&c->rq_xdpsq.cq);
 	mlx5e_close_cq(&c->rq.cq);
 	mlx5e_close_cq(&c->xdpsq.cq);
 	mlx5e_close_tx_cqs(c);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
index de4d5ae431af..d2b8ce5df59c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
@@ -97,10 +97,10 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
 	for (i = 0; i < c->num_tc; i++)
 		busy |= mlx5e_poll_tx_cq(&c->sq[i].cq, budget);
 
-	busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq, NULL);
+	busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq);
 
 	if (c->xdp)
-		busy |= mlx5e_poll_xdpsq_cq(&rq->xdpsq.cq, rq);
+		busy |= mlx5e_poll_xdpsq_cq(&c->rq_xdpsq.cq);
 
 	if (likely(budget)) { /* budget=0 means: don't poll rx rings */
 		work_done = mlx5e_poll_rx_cq(&rq->cq, budget);
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH bpf-next v4 13/17] net/mlx5e: XDP_TX from UMEM support
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (11 preceding siblings ...)
  2019-06-12 15:56 ` [PATCH bpf-next v4 12/17] net/mlx5e: Share the XDP SQ for XDP_TX between RQs Maxim Mikityanskiy
@ 2019-06-12 15:57 ` Maxim Mikityanskiy
  2019-06-12 15:57 ` [PATCH bpf-next v4 14/17] net/mlx5e: Consider XSK in XDP MTU limit calculation Maxim Mikityanskiy
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:57 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

When an XDP program returns XDP_TX and the RQ is XSK-enabled, the frame
requires careful handling, because convert_to_xdp_frame creates a new
page and copies the data there, while our driver expects the xdp_frame
to point to the same memory as the xdp_buff. Handle this case
separately: map the page now, and on completion unmap it and call
xdp_return_frame.
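
Schematically, the new branch can be modeled as follows (standalone
illustrative C, not driver code; the returned strings stand in for the
real DMA calls):

  #include <stdio.h>

  enum mem_type { MEM_PAGE, MEM_ZERO_COPY };

  /* Toy model of the decision added by this patch. */
  static const char *xmit_xdp_buff(enum mem_type type)
  {
          if (type == MEM_ZERO_COPY)
                  /* The data was copied into a fresh page by
                   * convert_to_xdp_frame: map it now, unmap it and
                   * call xdp_return_frame on completion. */
                  return "FRAME mode: dma_map_single now, unmap later";

          /* The xdp_frame aliases the RQ page, which is already
           * mapped bidirectionally: a sync is enough, and the page
           * is recycled on completion. */
          return "PAGE mode: dma_sync_single_for_device, recycle later";
  }

  int main(void)
  {
          printf("XSK RQ:     %s\n", xmit_xdp_buff(MEM_ZERO_COPY));
          printf("regular RQ: %s\n", xmit_xdp_buff(MEM_PAGE));
          return 0;
  }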

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 50 ++++++++++++++++---
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index b3e118fc4521..1364bdff702c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -69,14 +69,48 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	xdptxd.data = xdpf->data;
 	xdptxd.len  = xdpf->len;
 
-	xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
+	if (xdp->rxq->mem.type == MEM_TYPE_ZERO_COPY) {
+		/* The xdp_buff was in the UMEM and was copied into a newly
+		 * allocated page. The UMEM page was returned via the ZCA, and
+		 * this new page has to be mapped at this point and has to be
+		 * unmapped and returned via xdp_return_frame on completion.
+		 */
+
+		/* Prevent double recycling of the UMEM page. Even in case this
+		 * function returns false, the xdp_buff shouldn't be recycled,
+		 * as it was already done in xdp_convert_zc_to_xdp_frame.
+		 */
+		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
+
+		xdpi.mode = MLX5E_XDP_XMIT_MODE_FRAME;
 
-	dma_addr = di->addr + (xdpf->data - (void *)xdpf);
-	dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_TO_DEVICE);
+		dma_addr = dma_map_single(sq->pdev, xdptxd.data, xdptxd.len,
+					  DMA_TO_DEVICE);
+		if (dma_mapping_error(sq->pdev, dma_addr)) {
+			xdp_return_frame(xdpf);
+			return false;
+		}
 
-	xdptxd.dma_addr = dma_addr;
-	xdpi.page.rq = rq;
-	xdpi.page.di = *di;
+		xdptxd.dma_addr     = dma_addr;
+		xdpi.frame.xdpf     = xdpf;
+		xdpi.frame.dma_addr = dma_addr;
+	} else {
+		/* Driver assumes that convert_to_xdp_frame returns an xdp_frame
+		 * that points to the same memory region as the original
+		 * xdp_buff. It allows to map the memory only once and to use
+		 * the DMA_BIDIRECTIONAL mode.
+		 */
+
+		xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
+
+		dma_addr = di->addr + (xdpf->data - (void *)xdpf);
+		dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len,
+					   DMA_TO_DEVICE);
+
+		xdptxd.dma_addr = dma_addr;
+		xdpi.page.rq    = rq;
+		xdpi.page.di    = *di;
+	}
 
 	return sq->xmit_xdp_frame(sq, &xdptxd, &xdpi);
 }
@@ -298,13 +332,13 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 
 		switch (xdpi.mode) {
 		case MLX5E_XDP_XMIT_MODE_FRAME:
-			/* XDP_REDIRECT */
+			/* XDP_TX from the XSK RQ and XDP_REDIRECT */
 			dma_unmap_single(sq->pdev, xdpi.frame.dma_addr,
 					 xdpi.frame.xdpf->len, DMA_TO_DEVICE);
 			xdp_return_frame(xdpi.frame.xdpf);
 			break;
 		case MLX5E_XDP_XMIT_MODE_PAGE:
-			/* XDP_TX */
+			/* XDP_TX from the regular RQ */
 			mlx5e_page_release(xdpi.page.rq, &xdpi.page.di, recycle);
 			break;
 		default:
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH bpf-next v4 14/17] net/mlx5e: Consider XSK in XDP MTU limit calculation
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (12 preceding siblings ...)
  2019-06-12 15:57 ` [PATCH bpf-next v4 13/17] net/mlx5e: XDP_TX from UMEM support Maxim Mikityanskiy
@ 2019-06-12 15:57 ` Maxim Mikityanskiy
  2019-06-12 15:57 ` [PATCH bpf-next v4 15/17] net/mlx5e: Encapsulate open/close queues into a function Maxim Mikityanskiy
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:57 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Use the existing mlx5e_get_linear_rq_headroom function to calculate the
headroom for mlx5e_xdp_max_mtu. This function takes the XSK headroom
into consideration, which the following patches will rely on.
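
As a rough back-of-the-envelope check of what this headroom costs in the
non-XSK case (all constants below are illustrative x86-64 assumptions,
not values taken from the driver):

  #include <stdio.h>

  int main(void)
  {
          int page_size = 4096;    /* assumed PAGE_SIZE */
          int headroom  = 256 + 0; /* XDP_PACKET_HEADROOM + NET_IP_ALIGN */
          int shinfo    = 320;     /* ~SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */

          /* Space left for the linear frame, an upper bound on the XDP MTU: */
          printf("linear budget: %d bytes\n",
                 page_size - headroom - shinfo); /* 3520 */
          return 0;
  }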

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/params.c | 4 ++--
 drivers/net/ethernet/mellanox/mlx5/core/en/params.h | 2 ++
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c    | 5 +++--
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h    | 3 ++-
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c   | 4 ++--
 5 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 50a458dc3836..0de908b12fcc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -9,8 +9,8 @@ static inline bool mlx5e_rx_is_xdp(struct mlx5e_params *params,
 	return params->xdp_prog || xsk;
 }
 
-static inline u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
-					       struct mlx5e_xsk_param *xsk)
+u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
+				 struct mlx5e_xsk_param *xsk)
 {
 	u16 headroom = NET_IP_ALIGN;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index ed420f3efe52..7f29b82dd8c2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -11,6 +11,8 @@ struct mlx5e_xsk_param {
 	u16 chunk_size;
 };
 
+u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
+				 struct mlx5e_xsk_param *xsk);
 u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
 				struct mlx5e_xsk_param *xsk);
 u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 1364bdff702c..ee99efde9143 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -32,10 +32,11 @@
 
 #include <linux/bpf_trace.h>
 #include "en/xdp.h"
+#include "en/params.h"
 
-int mlx5e_xdp_max_mtu(struct mlx5e_params *params)
+int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
 {
-	int hr = NET_IP_ALIGN + XDP_PACKET_HEADROOM;
+	int hr = mlx5e_get_linear_rq_headroom(params, xsk);
 
 	/* Let S := SKB_DATA_ALIGN(sizeof(struct skb_shared_info)).
 	 * The condition checked in mlx5e_rx_is_linear_skb is:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 86db5ad49a42..9200cb9f499b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -39,7 +39,8 @@
 	(sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS)
 #define MLX5E_XDP_TX_DS_COUNT (MLX5E_XDP_TX_EMPTY_DS_COUNT + 1 /* SG DS */)
 
-int mlx5e_xdp_max_mtu(struct mlx5e_params *params);
+struct mlx5e_xsk_param;
+int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
 		      void *va, u16 *rx_headroom, u32 *len);
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 79f684cb8f51..44557ecd4d34 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3724,7 +3724,7 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
 	if (params->xdp_prog &&
 	    !mlx5e_rx_is_linear_skb(&new_channels.params)) {
 		netdev_err(netdev, "MTU(%d) > %d is not allowed while XDP enabled\n",
-			   new_mtu, mlx5e_xdp_max_mtu(params));
+			   new_mtu, mlx5e_xdp_max_mtu(params, NULL));
 		err = -EINVAL;
 		goto out;
 	}
@@ -4169,7 +4169,7 @@ static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog)
 	if (!mlx5e_rx_is_linear_skb(&new_channels.params)) {
 		netdev_warn(netdev, "XDP is not allowed with MTU(%d) > %d\n",
 			    new_channels.params.sw_mtu,
-			    mlx5e_xdp_max_mtu(&new_channels.params));
+			    mlx5e_xdp_max_mtu(&new_channels.params, NULL));
 		return -EINVAL;
 	}
 
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH bpf-next v4 15/17] net/mlx5e: Encapsulate open/close queues into a function
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (13 preceding siblings ...)
  2019-06-12 15:57 ` [PATCH bpf-next v4 14/17] net/mlx5e: Consider XSK in XDP MTU limit calculation Maxim Mikityanskiy
@ 2019-06-12 15:57 ` Maxim Mikityanskiy
  2019-06-12 15:57 ` [PATCH bpf-next v4 16/17] net/mlx5e: Move queue param structs to en/params.h Maxim Mikityanskiy
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:57 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

Create new functions mlx5e_{open,close}_queues to encapsulate opening
and closing RQs and SQs, and call the new functions from
mlx5e_{open,close}_channel. This simplifies the existing functions a bit
and prepares them for the upcoming AF_XDP changes.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 125 ++++++++++--------
 1 file changed, 73 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 44557ecd4d34..ae1cf425ee4e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1769,49 +1769,16 @@ static void mlx5e_free_xps_cpumask(struct mlx5e_channel *c)
 	free_cpumask_var(c->xps_cpumask);
 }
 
-static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
-			      struct mlx5e_params *params,
-			      struct mlx5e_channel_param *cparam,
-			      struct mlx5e_channel **cp)
+static int mlx5e_open_queues(struct mlx5e_channel *c,
+			     struct mlx5e_params *params,
+			     struct mlx5e_channel_param *cparam)
 {
-	int cpu = cpumask_first(mlx5_comp_irq_get_affinity_mask(priv->mdev, ix));
 	struct net_dim_cq_moder icocq_moder = {0, 0};
-	struct net_device *netdev = priv->netdev;
-	struct mlx5e_channel *c;
-	unsigned int irq;
 	int err;
-	int eqn;
-
-	err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
-	if (err)
-		return err;
-
-	c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
-	if (!c)
-		return -ENOMEM;
-
-	c->priv     = priv;
-	c->mdev     = priv->mdev;
-	c->tstamp   = &priv->tstamp;
-	c->ix       = ix;
-	c->cpu      = cpu;
-	c->pdev     = priv->mdev->device;
-	c->netdev   = priv->netdev;
-	c->mkey_be  = cpu_to_be32(priv->mdev->mlx5e_res.mkey.key);
-	c->num_tc   = params->num_tc;
-	c->xdp      = !!params->xdp_prog;
-	c->stats    = &priv->channel_stats[ix].ch;
-	c->irq_desc = irq_to_desc(irq);
-
-	err = mlx5e_alloc_xps_cpumask(c, params);
-	if (err)
-		goto err_free_channel;
-
-	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
 
 	err = mlx5e_open_cq(c, icocq_moder, &cparam->icosq_cq, &c->icosq.cq);
 	if (err)
-		goto err_napi_del;
+		return err;
 
 	err = mlx5e_open_tx_cqs(c, params, cparam);
 	if (err)
@@ -1856,8 +1823,6 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 	if (err)
 		goto err_close_rq;
 
-	*cp = c;
-
 	return 0;
 
 err_close_rq:
@@ -1875,6 +1840,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 
 err_disable_napi:
 	napi_disable(&c->napi);
+
 	if (c->xdp)
 		mlx5e_close_cq(&c->rq_xdpsq.cq);
 
@@ -1890,6 +1856,73 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
 err_close_icosq_cq:
 	mlx5e_close_cq(&c->icosq.cq);
 
+	return err;
+}
+
+static void mlx5e_close_queues(struct mlx5e_channel *c)
+{
+	mlx5e_close_xdpsq(&c->xdpsq);
+	mlx5e_close_rq(&c->rq);
+	if (c->xdp)
+		mlx5e_close_xdpsq(&c->rq_xdpsq);
+	mlx5e_close_sqs(c);
+	mlx5e_close_icosq(&c->icosq);
+	napi_disable(&c->napi);
+	if (c->xdp)
+		mlx5e_close_cq(&c->rq_xdpsq.cq);
+	mlx5e_close_cq(&c->rq.cq);
+	mlx5e_close_cq(&c->xdpsq.cq);
+	mlx5e_close_tx_cqs(c);
+	mlx5e_close_cq(&c->icosq.cq);
+}
+
+static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+			      struct mlx5e_params *params,
+			      struct mlx5e_channel_param *cparam,
+			      struct mlx5e_channel **cp)
+{
+	int cpu = cpumask_first(mlx5_comp_irq_get_affinity_mask(priv->mdev, ix));
+	struct net_device *netdev = priv->netdev;
+	struct mlx5e_channel *c;
+	unsigned int irq;
+	int err;
+	int eqn;
+
+	err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
+	if (err)
+		return err;
+
+	c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
+	if (!c)
+		return -ENOMEM;
+
+	c->priv     = priv;
+	c->mdev     = priv->mdev;
+	c->tstamp   = &priv->tstamp;
+	c->ix       = ix;
+	c->cpu      = cpu;
+	c->pdev     = priv->mdev->device;
+	c->netdev   = priv->netdev;
+	c->mkey_be  = cpu_to_be32(priv->mdev->mlx5e_res.mkey.key);
+	c->num_tc   = params->num_tc;
+	c->xdp      = !!params->xdp_prog;
+	c->stats    = &priv->channel_stats[ix].ch;
+	c->irq_desc = irq_to_desc(irq);
+
+	err = mlx5e_alloc_xps_cpumask(c, params);
+	if (err)
+		goto err_free_channel;
+
+	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
+
+	err = mlx5e_open_queues(c, params, cparam);
+	if (unlikely(err))
+		goto err_napi_del;
+
+	*cp = c;
+
+	return 0;
+
 err_napi_del:
 	netif_napi_del(&c->napi);
 	mlx5e_free_xps_cpumask(c);
@@ -1921,19 +1954,7 @@ static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
 
 static void mlx5e_close_channel(struct mlx5e_channel *c)
 {
-	mlx5e_close_xdpsq(&c->xdpsq);
-	mlx5e_close_rq(&c->rq);
-	if (c->xdp)
-		mlx5e_close_xdpsq(&c->rq_xdpsq);
-	mlx5e_close_sqs(c);
-	mlx5e_close_icosq(&c->icosq);
-	napi_disable(&c->napi);
-	if (c->xdp)
-		mlx5e_close_cq(&c->rq_xdpsq.cq);
-	mlx5e_close_cq(&c->rq.cq);
-	mlx5e_close_cq(&c->xdpsq.cq);
-	mlx5e_close_tx_cqs(c);
-	mlx5e_close_cq(&c->icosq.cq);
+	mlx5e_close_queues(c);
 	netif_napi_del(&c->napi);
 	mlx5e_free_xps_cpumask(c);
 
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH bpf-next v4 16/17] net/mlx5e: Move queue param structs to en/params.h
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (14 preceding siblings ...)
  2019-06-12 15:57 ` [PATCH bpf-next v4 15/17] net/mlx5e: Encapsulate open/close queues into a function Maxim Mikityanskiy
@ 2019-06-12 15:57 ` Maxim Mikityanskiy
  2019-06-12 19:10 ` [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Jonathan Lemon
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-12 15:57 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson
  Cc: bpf, netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski, Maxim Mikityanskiy

structs mlx5e_{rq,sq,cq,channel}_param are going to be used in the
upcoming XSK RX and TX patches. Move them to a header file to make
them accessible from other C files.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
---
 .../ethernet/mellanox/mlx5/core/en/params.h   | 31 +++++++++++++++++++
 .../net/ethernet/mellanox/mlx5/core/en_main.c | 29 -----------------
 2 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index 7f29b82dd8c2..f83417b822bf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -11,6 +11,37 @@ struct mlx5e_xsk_param {
 	u16 chunk_size;
 };
 
+struct mlx5e_rq_param {
+	u32                        rqc[MLX5_ST_SZ_DW(rqc)];
+	struct mlx5_wq_param       wq;
+	struct mlx5e_rq_frags_info frags_info;
+};
+
+struct mlx5e_sq_param {
+	u32                        sqc[MLX5_ST_SZ_DW(sqc)];
+	struct mlx5_wq_param       wq;
+	bool                       is_mpw;
+};
+
+struct mlx5e_cq_param {
+	u32                        cqc[MLX5_ST_SZ_DW(cqc)];
+	struct mlx5_wq_param       wq;
+	u16                        eq_ix;
+	u8                         cq_period_mode;
+};
+
+struct mlx5e_channel_param {
+	struct mlx5e_rq_param      rq;
+	struct mlx5e_sq_param      sq;
+	struct mlx5e_sq_param      xdp_sq;
+	struct mlx5e_sq_param      icosq;
+	struct mlx5e_cq_param      rx_cq;
+	struct mlx5e_cq_param      tx_cq;
+	struct mlx5e_cq_param      icosq_cq;
+};
+
+/* Parameter calculations */
+
 u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params,
 				 struct mlx5e_xsk_param *xsk);
 u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index ae1cf425ee4e..bb39ec1482c9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -57,35 +57,6 @@
 #include "en/reporter.h"
 #include "en/params.h"
 
-struct mlx5e_rq_param {
-	u32			rqc[MLX5_ST_SZ_DW(rqc)];
-	struct mlx5_wq_param	wq;
-	struct mlx5e_rq_frags_info frags_info;
-};
-
-struct mlx5e_sq_param {
-	u32                        sqc[MLX5_ST_SZ_DW(sqc)];
-	struct mlx5_wq_param       wq;
-	bool                       is_mpw;
-};
-
-struct mlx5e_cq_param {
-	u32                        cqc[MLX5_ST_SZ_DW(cqc)];
-	struct mlx5_wq_param       wq;
-	u16                        eq_ix;
-	u8                         cq_period_mode;
-};
-
-struct mlx5e_channel_param {
-	struct mlx5e_rq_param      rq;
-	struct mlx5e_sq_param      sq;
-	struct mlx5e_sq_param      xdp_sq;
-	struct mlx5e_sq_param      icosq;
-	struct mlx5e_cq_param      rx_cq;
-	struct mlx5e_cq_param      tx_cq;
-	struct mlx5e_cq_param      icosq_cq;
-};
-
 bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev)
 {
 	bool striding_rq_umr = MLX5_CAP_GEN(mdev, striding_rq) &&
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (15 preceding siblings ...)
  2019-06-12 15:57 ` [PATCH bpf-next v4 16/17] net/mlx5e: Move queue param structs to en/params.h Maxim Mikityanskiy
@ 2019-06-12 19:10 ` Jonathan Lemon
  2019-06-12 20:48 ` Jakub Kicinski
       [not found] ` <20190612155605.22450-18-maximmi@mellanox.com>
  18 siblings, 0 replies; 48+ messages in thread
From: Jonathan Lemon @ 2019-06-12 19:10 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jakub Kicinski, Maciej Fijalkowski

On 12 Jun 2019, at 8:56, Maxim Mikityanskiy wrote:

> This series contains improvements to the AF_XDP kernel infrastructure
> and AF_XDP support in mlx5e. The infrastructure improvements are
> required for mlx5e, but also some of them benefit to all drivers, and
> some can be useful for other drivers that want to implement AF_XDP.
>
> The performance testing was performed on a machine with the following
> configuration:
>
> - 24 cores of Intel Xeon E5-2620 v3 @ 2.40 GHz
> - Mellanox ConnectX-5 Ex with 100 Gbit/s link
>
> The results with retpoline disabled, single stream:
>
> txonly: 33.3 Mpps (21.5 Mpps with queue and app pinned to the same CPU)
> rxdrop: 12.2 Mpps
> l2fwd: 9.4 Mpps
>
> The results with retpoline enabled, single stream:
>
> txonly: 21.3 Mpps (14.1 Mpps with queue and app pinned to the same CPU)
> rxdrop: 9.9 Mpps
> l2fwd: 6.8 Mpps
>
> v2 changes:
>
> Added patches for mlx5e and addressed the comments for v1. Rebased for
> bpf-next.
>
> v3 changes:
>
> Rebased for the newer bpf-next, resolved conflicts in libbpf. Addressed
> Björn's comments for coding style. Fixed a bug in error handling flow in
> mlx5e_open_xsk.
>
> v4 changes:
>
> UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
> half of the available amount of RX queues are regular queues, and the
> upper half are XSK RX queues. The patch "xsk: Extend channels to support
> combined XSK/non-XSK traffic" was dropped. The final patch was reworked
> accordingly.
>
> Added "net/mlx5e: Attach/detach XDP program safely", as the changes
> introduced in the XSK patch base on the stuff from this one.
>
> Added "libbpf: Support drivers with non-combined channels", which aligns
> the condition in libbpf with the condition in the kernel.
>
> Rebased over the newer bpf-next.

Very nice change for the RX queues!
For the series:

Tested-by: Jonathan Lemon <jonathan.lemon@gmail.com>

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-12 15:56 ` [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it Maxim Mikityanskiy
@ 2019-06-12 20:10   ` Jakub Kicinski
  2019-06-13 14:01     ` Maxim Mikityanskiy
  0 siblings, 1 reply; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-12 20:10 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On Wed, 12 Jun 2019 15:56:43 +0000, Maxim Mikityanskiy wrote:
> The typical XDP memory scheme is one packet per page. Change the AF_XDP
> frame size in libbpf to 4096, which is the page size on x86, to allow
> libbpf to be used with the drivers with the packet-per-page scheme.

This is slightly surprising.  Why does the driver care about the bufsz?

You're not supposed to do page operations on UMEM pages, anyway.
And the RX size filter should be configured according to MTU regardless
of XDP state.

Can you explain?

> Add a command line option -f to xdpsock to allow to specify a custom
> frame size.
> 
> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> Acked-by: Saeed Mahameed <saeedm@mellanox.com>

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-12 15:56 ` [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels Maxim Mikityanskiy
@ 2019-06-12 20:23   ` Jakub Kicinski
  2019-06-13 12:41     ` Björn Töpel
  2019-06-13 14:01     ` Maxim Mikityanskiy
  0 siblings, 2 replies; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-12 20:23 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
> Currently, libbpf uses the number of combined channels as the maximum
> queue number. However, the kernel has a different limitation:
> 
> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
> 
> - ethtool_set_channels() checks for UMEMs in queues up to
>   combined_count + max(rx_count, tx_count).
> 
> libbpf shouldn't limit applications to a lower max queue number. Account
> for non-combined RX and TX channels when calculating the max queue
> number. Use the same formula that is used in ethtool.
> 
> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> Acked-by: Saeed Mahameed <saeedm@mellanox.com>

I don't think this is correct.  max_tx tells you how many TX channels
there can be; you can't add that to combined.  The correct calculation is:

max_num_chans = max(max_combined, max(max_rx, max_tx))

>  tools/lib/bpf/xsk.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> index bf15a80a37c2..86107857e1f0 100644
> --- a/tools/lib/bpf/xsk.c
> +++ b/tools/lib/bpf/xsk.c
> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
>  		goto out;
>  	}
>  
> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
> +
> +	if (ret == 0 || errno == EOPNOTSUPP)
>  		/* If the device says it has no channels, then all traffic
>  		 * is sent to a single stream, so max queues = 1.
>  		 */
>  		ret = 1;
> -	else
> -		ret = channels.max_combined;
>  
>  out:
>  	close(fd);


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
                   ` (16 preceding siblings ...)
  2019-06-12 19:10 ` [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Jonathan Lemon
@ 2019-06-12 20:48 ` Jakub Kicinski
  2019-06-13 12:58   ` Björn Töpel
  2019-06-13 14:01   ` Maxim Mikityanskiy
       [not found] ` <20190612155605.22450-18-maximmi@mellanox.com>
  18 siblings, 2 replies; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-12 20:48 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On Wed, 12 Jun 2019 15:56:33 +0000, Maxim Mikityanskiy wrote:
> UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
> half of the available amount of RX queues are regular queues, and the
> upper half are XSK RX queues. 

If I have 32 queues enabled on the NIC and I install an AF_XDP socket on
queue 10, does the NIC now have 64 RQs, but only the first 32 are in the
normal RSS map?

> The patch "xsk: Extend channels to support combined XSK/non-XSK
> traffic" was dropped. The final patch was reworked accordingly.

The final patch has 2k LoC, which is kind of hard to digest.  You can
also post the cleanup patches separately; there's no need for a large
series here.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-12 20:23   ` Jakub Kicinski
@ 2019-06-13 12:41     ` Björn Töpel
  2019-06-13 17:34       ` Jakub Kicinski
  2019-06-13 14:01     ` Maxim Mikityanskiy
  1 sibling, 1 reply; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 12:41 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Maxim Mikityanskiy, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On Wed, 12 Jun 2019 at 22:24, Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
>
> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
> > Currently, libbpf uses the number of combined channels as the maximum
> > queue number. However, the kernel has a different limitation:
> >
> > - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
> >
> > - ethtool_set_channels() checks for UMEMs in queues up to
> >   combined_count + max(rx_count, tx_count).
> >
> > libbpf shouldn't limit applications to a lower max queue number. Account
> > for non-combined RX and TX channels when calculating the max queue
> > number. Use the same formula that is used in ethtool.
> >
> > Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> > Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> > Acked-by: Saeed Mahameed <saeedm@mellanox.com>
>
> I don't think this is correct.  max_tx tells you how many TX channels
> there can be; you can't add that to combined.  The correct calculation is:
>
> max_num_chans = max(max_combined, max(max_rx, max_tx))
>

...but the inner max should be min, right?

Assuming we'd like to receive and send.
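
To make the three formulas in this subthread concrete (a standalone
sketch; the device numbers are invented):

  #include <stdio.h>

  static unsigned int max2(unsigned int a, unsigned int b) { return a > b ? a : b; }
  static unsigned int min2(unsigned int a, unsigned int b) { return a < b ? a : b; }

  int main(void)
  {
          /* Hypothetical device: 8 combined channels, 4 RX-only, 2 TX-only. */
          unsigned int combined = 8, rx = 4, tx = 2;

          /* The patch: combined_count + max(rx_count, tx_count). */
          printf("patch:     %u\n", combined + max2(rx, tx));      /* 12 */
          /* Jakub: max(max_combined, max(max_rx, max_tx)). */
          printf("inner max: %u\n", max2(combined, max2(rx, tx))); /*  8 */
          /* Inner min, if every queue must both receive and send. */
          printf("inner min: %u\n", max2(combined, min2(rx, tx))); /*  8 */
          return 0;
  }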

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 06/17] xsk: Return the whole xdp_desc from xsk_umem_consume_tx
  2019-06-12 15:56 ` [PATCH bpf-next v4 06/17] xsk: Return the whole xdp_desc from xsk_umem_consume_tx Maxim Mikityanskiy
@ 2019-06-13 12:48   ` Björn Töpel
  0 siblings, 0 replies; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 12:48 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Jakub Kicinski, Maciej Fijalkowski

On Wed, 12 Jun 2019 at 20:05, Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
>
> Some drivers want to access the data transmitted in order to implement
> acceleration features of the NICs. It is also useful in AF_XDP TX flow.
>
> Change the xsk_umem_consume_tx API to return the whole xdp_desc, that
> contains the data pointer, length and DMA address, instead of only the
> latter two. Adapt the implementation of i40e and ixgbe to this change.
>

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
> Cc: Björn Töpel <bjorn.topel@intel.com>
> Cc: Magnus Karlsson <magnus.karlsson@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_xsk.c   | 12 +++++++-----
>  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 15 +++++++++------
>  include/net/xdp_sock.h                       |  6 +++---
>  net/xdp/xsk.c                                | 10 +++-------
>  4 files changed, 22 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
> index 1b17486543ac..eae6fafad1b8 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
> @@ -640,8 +640,8 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
>         struct i40e_tx_desc *tx_desc = NULL;
>         struct i40e_tx_buffer *tx_bi;
>         bool work_done = true;
> +       struct xdp_desc desc;
>         dma_addr_t dma;
> -       u32 len;
>
>         while (budget-- > 0) {
>                 if (!unlikely(I40E_DESC_UNUSED(xdp_ring))) {
> @@ -650,21 +650,23 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
>                         break;
>                 }
>
> -               if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
> +               if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
>                         break;
>
> -               dma_sync_single_for_device(xdp_ring->dev, dma, len,
> +               dma = xdp_umem_get_dma(xdp_ring->xsk_umem, desc.addr);
> +
> +               dma_sync_single_for_device(xdp_ring->dev, dma, desc.len,
>                                            DMA_BIDIRECTIONAL);
>
>                 tx_bi = &xdp_ring->tx_bi[xdp_ring->next_to_use];
> -               tx_bi->bytecount = len;
> +               tx_bi->bytecount = desc.len;
>
>                 tx_desc = I40E_TX_DESC(xdp_ring, xdp_ring->next_to_use);
>                 tx_desc->buffer_addr = cpu_to_le64(dma);
>                 tx_desc->cmd_type_offset_bsz =
>                         build_ctob(I40E_TX_DESC_CMD_ICRC
>                                    | I40E_TX_DESC_CMD_EOP,
> -                                  0, len, 0);
> +                                  0, desc.len, 0);
>
>                 xdp_ring->next_to_use++;
>                 if (xdp_ring->next_to_use == xdp_ring->count)
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> index bfe95ce0bd7f..0297a70a4e2d 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> @@ -621,8 +621,9 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
>         union ixgbe_adv_tx_desc *tx_desc = NULL;
>         struct ixgbe_tx_buffer *tx_bi;
>         bool work_done = true;
> -       u32 len, cmd_type;
> +       struct xdp_desc desc;
>         dma_addr_t dma;
> +       u32 cmd_type;
>
>         while (budget-- > 0) {
>                 if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
> @@ -631,14 +632,16 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
>                         break;
>                 }
>
> -               if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &dma, &len))
> +               if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
>                         break;
>
> -               dma_sync_single_for_device(xdp_ring->dev, dma, len,
> +               dma = xdp_umem_get_dma(xdp_ring->xsk_umem, desc.addr);
> +
> +               dma_sync_single_for_device(xdp_ring->dev, dma, desc.len,
>                                            DMA_BIDIRECTIONAL);
>
>                 tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> -               tx_bi->bytecount = len;
> +               tx_bi->bytecount = desc.len;
>                 tx_bi->xdpf = NULL;
>
>                 tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> @@ -648,10 +651,10 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
>                 cmd_type = IXGBE_ADVTXD_DTYP_DATA |
>                            IXGBE_ADVTXD_DCMD_DEXT |
>                            IXGBE_ADVTXD_DCMD_IFCS;
> -               cmd_type |= len | IXGBE_TXD_CMD;
> +               cmd_type |= desc.len | IXGBE_TXD_CMD;
>                 tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
>                 tx_desc->read.olinfo_status =
> -                       cpu_to_le32(len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +                       cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
>
>                 xdp_ring->next_to_use++;
>                 if (xdp_ring->next_to_use == xdp_ring->count)
> diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> index b6f5ebae43a1..057b159ff8b9 100644
> --- a/include/net/xdp_sock.h
> +++ b/include/net/xdp_sock.h
> @@ -81,7 +81,7 @@ bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt);
>  u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr);
>  void xsk_umem_discard_addr(struct xdp_umem *umem);
>  void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries);
> -bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len);
> +bool xsk_umem_consume_tx(struct xdp_umem *umem, struct xdp_desc *desc);
>  void xsk_umem_consume_tx_done(struct xdp_umem *umem);
>  struct xdp_umem_fq_reuse *xsk_reuseq_prepare(u32 nentries);
>  struct xdp_umem_fq_reuse *xsk_reuseq_swap(struct xdp_umem *umem,
> @@ -175,8 +175,8 @@ static inline void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries)
>  {
>  }
>
> -static inline bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma,
> -                                      u32 *len)
> +static inline bool xsk_umem_consume_tx(struct xdp_umem *umem,
> +                                      struct xdp_desc *desc)
>  {
>         return false;
>  }
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 35ca531ac74e..74417a851ed5 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -172,22 +172,18 @@ void xsk_umem_consume_tx_done(struct xdp_umem *umem)
>  }
>  EXPORT_SYMBOL(xsk_umem_consume_tx_done);
>
> -bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len)
> +bool xsk_umem_consume_tx(struct xdp_umem *umem, struct xdp_desc *desc)
>  {
> -       struct xdp_desc desc;
>         struct xdp_sock *xs;
>
>         rcu_read_lock();
>         list_for_each_entry_rcu(xs, &umem->xsk_list, list) {
> -               if (!xskq_peek_desc(xs->tx, &desc))
> +               if (!xskq_peek_desc(xs->tx, desc))
>                         continue;
>
> -               if (xskq_produce_addr_lazy(umem->cq, desc.addr))
> +               if (xskq_produce_addr_lazy(umem->cq, desc->addr))
>                         goto out;
>
> -               *dma = xdp_umem_get_dma(umem, desc.addr);
> -               *len = desc.len;
> -
>                 xskq_discard_desc(xs->tx);
>                 rcu_read_unlock();
>                 return true;
> --
> 2.19.1
>
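
A minimal sketch of a driver TX loop against the new signature (my_ring
and my_tx_fill_desc() are hypothetical placeholders; the xsk_* and DMA
calls are the APIs used in the patch above):

static bool my_xmit_zc(struct my_ring *ring, unsigned int budget)
{
	struct xdp_desc desc;
	dma_addr_t dma;

	while (budget-- > 0) {
		/* One call now yields both the UMEM offset and the length. */
		if (!xsk_umem_consume_tx(ring->xsk_umem, &desc))
			break;

		/* The driver translates the offset to a DMA address itself. */
		dma = xdp_umem_get_dma(ring->xsk_umem, desc.addr);
		dma_sync_single_for_device(ring->dev, dma, desc.len,
					   DMA_BIDIRECTIONAL);

		my_tx_fill_desc(ring, dma, desc.len); /* hypothetical */
	}

	xsk_umem_consume_tx_done(ring->xsk_umem);
	return true;
}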


* Re: [PATCH bpf-next v4 02/17] xsk: Add API to check for available entries in FQ
  2019-06-12 15:56 ` [PATCH bpf-next v4 02/17] xsk: Add API to check for available entries in FQ Maxim Mikityanskiy
@ 2019-06-13 12:50   ` Björn Töpel
  0 siblings, 0 replies; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 12:50 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Jakub Kicinski, Maciej Fijalkowski

On Wed, 12 Jun 2019 at 20:05, Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
>
> Add a function that checks whether the Fill Ring has the specified
> number of descriptors available. It will be useful for mlx5e, which wants
> to check in advance whether it can allocate a bulk of RX descriptors,
> to get the best performance.
>

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
> ---
>  include/net/xdp_sock.h | 21 +++++++++++++++++++++
>  net/xdp/xsk.c          |  6 ++++++
>  net/xdp/xsk_queue.h    | 14 ++++++++++++++
>  3 files changed, 41 insertions(+)
>
> diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> index ae0f368a62bb..b6f5ebae43a1 100644
> --- a/include/net/xdp_sock.h
> +++ b/include/net/xdp_sock.h
> @@ -77,6 +77,7 @@ int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);
>  void xsk_flush(struct xdp_sock *xs);
>  bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs);
>  /* Used from netdev driver */
> +bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt);
>  u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr);
>  void xsk_umem_discard_addr(struct xdp_umem *umem);
>  void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries);
> @@ -99,6 +100,16 @@ static inline dma_addr_t xdp_umem_get_dma(struct xdp_umem *umem, u64 addr)
>  }
>
>  /* Reuse-queue aware version of FILL queue helpers */
> +static inline bool xsk_umem_has_addrs_rq(struct xdp_umem *umem, u32 cnt)
> +{
> +       struct xdp_umem_fq_reuse *rq = umem->fq_reuse;
> +
> +       if (rq->length >= cnt)
> +               return true;
> +
> +       return xsk_umem_has_addrs(umem, cnt - rq->length);
> +}
> +
>  static inline u64 *xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr)
>  {
>         struct xdp_umem_fq_reuse *rq = umem->fq_reuse;
> @@ -146,6 +157,11 @@ static inline bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
>         return false;
>  }
>
> +static inline bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt)
> +{
> +       return false;
> +}
> +
>  static inline u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr)
>  {
>         return NULL;
> @@ -200,6 +216,11 @@ static inline dma_addr_t xdp_umem_get_dma(struct xdp_umem *umem, u64 addr)
>         return 0;
>  }
>
> +static inline bool xsk_umem_has_addrs_rq(struct xdp_umem *umem, u32 cnt)
> +{
> +       return false;
> +}
> +
>  static inline u64 *xsk_umem_peek_addr_rq(struct xdp_umem *umem, u64 *addr)
>  {
>         return NULL;
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index a14e8864e4fa..b68a380f50b3 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -37,6 +37,12 @@ bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
>                 READ_ONCE(xs->umem->fq);
>  }
>
> +bool xsk_umem_has_addrs(struct xdp_umem *umem, u32 cnt)
> +{
> +       return xskq_has_addrs(umem->fq, cnt);
> +}
> +EXPORT_SYMBOL(xsk_umem_has_addrs);
> +
>  u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr)
>  {
>         return xskq_peek_addr(umem->fq, addr);
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index 88b9ae24658d..12b49784a6d5 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -117,6 +117,20 @@ static inline u32 xskq_nb_free(struct xsk_queue *q, u32 producer, u32 dcnt)
>         return q->nentries - (producer - q->cons_tail);
>  }
>
> +static inline bool xskq_has_addrs(struct xsk_queue *q, u32 cnt)
> +{
> +       u32 entries = q->prod_tail - q->cons_tail;
> +
> +       if (entries >= cnt)
> +               return true;
> +
> +       /* Refresh the local pointer. */
> +       q->prod_tail = READ_ONCE(q->ring->producer);
> +       entries = q->prod_tail - q->cons_tail;
> +
> +       return entries >= cnt;
> +}
> +
>  /* UMEM queue */
>
>  static inline bool xskq_is_valid_addr(struct xsk_queue *q, u64 addr)
> --
> 2.19.1
>
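
A sketch of the driver-side use this enables, checking the Fill Ring
before committing to a bulk RX allocation (the my_* names are
hypothetical; xsk_umem_has_addrs_rq() is the reuse-queue-aware helper
added above):

static int my_alloc_rx_bulk(struct my_ring *ring, int count)
{
	int i;

	/* Bail out before touching the ring if the Fill Ring cannot
	 * supply a whole bulk, instead of failing halfway through.
	 */
	if (!xsk_umem_has_addrs_rq(ring->umem, count))
		return -ENOMEM;

	for (i = 0; i < count; i++)
		my_alloc_one_from_umem(ring, i);	/* hypothetical */

	return 0;
}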


* Re: [PATCH bpf-next v4 03/17] xsk: Add getsockopt XDP_OPTIONS
  2019-06-12 15:56 ` [PATCH bpf-next v4 03/17] xsk: Add getsockopt XDP_OPTIONS Maxim Mikityanskiy
@ 2019-06-13 12:50   ` Björn Töpel
  0 siblings, 0 replies; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 12:50 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Jakub Kicinski, Maciej Fijalkowski

On Wed, 12 Jun 2019 at 20:05, Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
>
> Make it possible for the application to determine whether the AF_XDP
> socket is running in zero-copy mode. To achieve this, add a new
> getsockopt option XDP_OPTIONS that returns flags. The only flag
> supported for now is the zero-copy mode indicator.
>

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
> ---
>  include/uapi/linux/if_xdp.h       |  8 ++++++++
>  net/xdp/xsk.c                     | 20 ++++++++++++++++++++
>  tools/include/uapi/linux/if_xdp.h |  8 ++++++++
>  3 files changed, 36 insertions(+)
>
> diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
> index caed8b1614ff..faaa5ca2a117 100644
> --- a/include/uapi/linux/if_xdp.h
> +++ b/include/uapi/linux/if_xdp.h
> @@ -46,6 +46,7 @@ struct xdp_mmap_offsets {
>  #define XDP_UMEM_FILL_RING             5
>  #define XDP_UMEM_COMPLETION_RING       6
>  #define XDP_STATISTICS                 7
> +#define XDP_OPTIONS                    8
>
>  struct xdp_umem_reg {
>         __u64 addr; /* Start of packet data area */
> @@ -60,6 +61,13 @@ struct xdp_statistics {
>         __u64 tx_invalid_descs; /* Dropped due to invalid descriptor */
>  };
>
> +struct xdp_options {
> +       __u32 flags;
> +};
> +
> +/* Flags for the flags field of struct xdp_options */
> +#define XDP_OPTIONS_ZEROCOPY (1 << 0)
> +
>  /* Pgoff for mmaping the rings */
>  #define XDP_PGOFF_RX_RING                        0
>  #define XDP_PGOFF_TX_RING               0x80000000
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index b68a380f50b3..35ca531ac74e 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -650,6 +650,26 @@ static int xsk_getsockopt(struct socket *sock, int level, int optname,
>
>                 return 0;
>         }
> +       case XDP_OPTIONS:
> +       {
> +               struct xdp_options opts = {};
> +
> +               if (len < sizeof(opts))
> +                       return -EINVAL;
> +
> +               mutex_lock(&xs->mutex);
> +               if (xs->zc)
> +                       opts.flags |= XDP_OPTIONS_ZEROCOPY;
> +               mutex_unlock(&xs->mutex);
> +
> +               len = sizeof(opts);
> +               if (copy_to_user(optval, &opts, len))
> +                       return -EFAULT;
> +               if (put_user(len, optlen))
> +                       return -EFAULT;
> +
> +               return 0;
> +       }
>         default:
>                 break;
>         }
> diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
> index caed8b1614ff..faaa5ca2a117 100644
> --- a/tools/include/uapi/linux/if_xdp.h
> +++ b/tools/include/uapi/linux/if_xdp.h
> @@ -46,6 +46,7 @@ struct xdp_mmap_offsets {
>  #define XDP_UMEM_FILL_RING             5
>  #define XDP_UMEM_COMPLETION_RING       6
>  #define XDP_STATISTICS                 7
> +#define XDP_OPTIONS                    8
>
>  struct xdp_umem_reg {
>         __u64 addr; /* Start of packet data area */
> @@ -60,6 +61,13 @@ struct xdp_statistics {
>         __u64 tx_invalid_descs; /* Dropped due to invalid descriptor */
>  };
>
> +struct xdp_options {
> +       __u32 flags;
> +};
> +
> +/* Flags for the flags field of struct xdp_options */
> +#define XDP_OPTIONS_ZEROCOPY (1 << 0)
> +
>  /* Pgoff for mmaping the rings */
>  #define XDP_PGOFF_RX_RING                        0
>  #define XDP_PGOFF_TX_RING               0x80000000
> --
> 2.19.1
>
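
For applications that bypass libbpf, the new option can be queried
directly with getsockopt(); a minimal sketch (fd is an already-bound
AF_XDP socket):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>

#ifndef SOL_XDP
#define SOL_XDP 283	/* from linux/socket.h, for older libc headers */
#endif

static void print_zc_mode(int fd)
{
	struct xdp_options opts;
	socklen_t optlen = sizeof(opts);

	if (getsockopt(fd, SOL_XDP, XDP_OPTIONS, &opts, &optlen))
		return;	/* e.g. an older kernel without XDP_OPTIONS */

	printf("zero-copy: %s\n",
	       (opts.flags & XDP_OPTIONS_ZEROCOPY) ? "yes" : "no");
}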


* Re: [PATCH bpf-next v4 04/17] libbpf: Support getsockopt XDP_OPTIONS
  2019-06-12 15:56 ` [PATCH bpf-next v4 04/17] libbpf: Support " Maxim Mikityanskiy
@ 2019-06-13 12:51   ` Björn Töpel
  0 siblings, 0 replies; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 12:51 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Jakub Kicinski, Maciej Fijalkowski

On Wed, 12 Jun 2019 at 20:05, Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
>
> Query XDP_OPTIONS in libbpf to determine if the zero-copy mode is active
> or not.
>

Acked-by: Björn Töpel <bjorn.topel@intel.com>

> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
> ---
>  tools/lib/bpf/xsk.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> index 7ef6293b4fd7..bf15a80a37c2 100644
> --- a/tools/lib/bpf/xsk.c
> +++ b/tools/lib/bpf/xsk.c
> @@ -65,6 +65,7 @@ struct xsk_socket {
>         int xsks_map_fd;
>         __u32 queue_id;
>         char ifname[IFNAMSIZ];
> +       bool zc;
>  };
>
>  struct xsk_nl_info {
> @@ -480,6 +481,7 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
>         void *rx_map = NULL, *tx_map = NULL;
>         struct sockaddr_xdp sxdp = {};
>         struct xdp_mmap_offsets off;
> +       struct xdp_options opts;
>         struct xsk_socket *xsk;
>         socklen_t optlen;
>         int err;
> @@ -597,6 +599,16 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
>         }
>
>         xsk->prog_fd = -1;
> +
> +       optlen = sizeof(opts);
> +       err = getsockopt(xsk->fd, SOL_XDP, XDP_OPTIONS, &opts, &optlen);
> +       if (err) {
> +               err = -errno;
> +               goto out_mmap_tx;
> +       }
> +
> +       xsk->zc = opts.flags & XDP_OPTIONS_ZEROCOPY;
> +
>         if (!(xsk->config.libbpf_flags & XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD)) {
>                 err = xsk_setup_xdp_prog(xsk);
>                 if (err)
> --
> 2.19.1
>


* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-12 20:48 ` Jakub Kicinski
@ 2019-06-13 12:58   ` Björn Töpel
  2019-06-13 14:01     ` Maxim Mikityanskiy
  2019-06-13 14:01   ` Maxim Mikityanskiy
  1 sibling, 1 reply; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 12:58 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Maxim Mikityanskiy, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On Wed, 12 Jun 2019 at 22:49, Jakub Kicinski
<jakub.kicinski@netronome.com> wrote:
>
> On Wed, 12 Jun 2019 15:56:33 +0000, Maxim Mikityanskiy wrote:
> > UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
> > half of the available amount of RX queues are regular queues, and the
> > upper half are XSK RX queues.
>
> If I have 32 queues enabled on the NIC and I install AF_XDP socket on
> queue 10, does the NIC now have 64 RQs, but only first 32 are in the
> normal RSS map?
>

Additional, related, question to Jakub's: Say that I'd like to hijack
all 32 Rx queues of the NIC. I create 32 AF_XDP sockets and attach them
in zero-copy mode to the device. What's the result?

> > The patch "xsk: Extend channels to support combined XSK/non-XSK
> > traffic" was dropped. The final patch was reworked accordingly.
>
> The final patch has 2k LoC, kind of hard to digest.  You can also
> post the cleanup patches separately, no need for a large series here.


* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-12 20:23   ` Jakub Kicinski
  2019-06-13 12:41     ` Björn Töpel
@ 2019-06-13 14:01     ` Maxim Mikityanskiy
  2019-06-13 14:45       ` Maciej Fijalkowski
  2019-06-13 18:09       ` Jakub Kicinski
  1 sibling, 2 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-13 14:01 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On 2019-06-12 23:23, Jakub Kicinski wrote:
> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
>> Currently, libbpf uses the number of combined channels as the maximum
>> queue number. However, the kernel has a different limitation:
>>
>> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
>>
>> - ethtool_set_channels() checks for UMEMs in queues up to
>>    combined_count + max(rx_count, tx_count).
>>
>> libbpf shouldn't limit applications to a lower max queue number. Account
>> for non-combined RX and TX channels when calculating the max queue
>> number. Use the same formula that is used in ethtool.
>>
>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
> 
> I don't think this is correct.  max_tx tells you how many TX channels
> there can be, you can't add that to combined.  The correct calculation is:
> 
> max_num_chans = max(max_combined, max(max_rx, max_tx))

First of all, I'm aligning with the formula in the kernel, which is:

     curr.combined_count + max(curr.rx_count, curr.tx_count);

(see net/core/ethtool.c, ethtool_set_channels()).

The formula in libbpf should match it.

Second, the existing drivers have either combined channels or separate 
rx and tx channels. So, for the first kind of drivers, max_tx doesn't 
tell how many TX channels there can be, it just says 0, and max_combined 
tells how many TX and RX channels are supported. As max_tx doesn't 
include max_combined (and vice versa), we should add them up.
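
As a concrete illustration (the numbers are made up): a combined-only
driver reporting max_combined = 32, max_rx = max_tx = 0 gives
32 + max(0, 0) = 32, and a driver with only separate channels reporting
max_combined = 0, max_rx = max_tx = 16 gives 0 + max(16, 16) = 16. In
both cases the sum covers every queue the device can expose.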

>>   tools/lib/bpf/xsk.c | 6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
>> index bf15a80a37c2..86107857e1f0 100644
>> --- a/tools/lib/bpf/xsk.c
>> +++ b/tools/lib/bpf/xsk.c
>> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
>>   		goto out;
>>   	}
>>   
>> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
>> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
>> +
>> +	if (ret == 0 || errno == EOPNOTSUPP)
>>   		/* If the device says it has no channels, then all traffic
>>   		 * is sent to a single stream, so max queues = 1.
>>   		 */
>>   		ret = 1;
>> -	else
>> -		ret = channels.max_combined;
>>   
>>   out:
>>   	close(fd);
> 



* Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-12 20:10   ` Jakub Kicinski
@ 2019-06-13 14:01     ` Maxim Mikityanskiy
  2019-06-13 17:29       ` Jakub Kicinski
  0 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-13 14:01 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On 2019-06-12 23:10, Jakub Kicinski wrote:
> On Wed, 12 Jun 2019 15:56:43 +0000, Maxim Mikityanskiy wrote:
>> The typical XDP memory scheme is one packet per page. Change the AF_XDP
>> frame size in libbpf to 4096, which is the page size on x86, to allow
>> libbpf to be used with the drivers with the packet-per-page scheme.
> 
> This is slightly surprising.  Why does the driver care about the bufsz?

The classic XDP implementation supports only the packet-per-page scheme.
mlx5e implements this scheme, because it fits perfectly with the
xdp_return and page pool APIs. AF_XDP relies on XDP, and even though
AF_XDP doesn't really allocate or release pages, it works on top of XDP,
and the XDP implementation in mlx5e does allocate and release pages (in
the general case) and works with the packet-per-page scheme.

> You're not supposed to do page operations on UMEM pages, anyway.
> And the RX size filter should be configured according to MTU regardless
> of XDP state.

Yes, of course, MTU is taken into account.

> Can you explain?
> 
>> Add a command line option -f to xdpsock to allow to specify a custom
>> frame size.
>>
>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
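
As a usage sketch of the new -f option (the interface and queue number
are made up; -r is the sample's existing rxdrop mode):

	# one 4096-byte frame per page, matching the packet-per-page scheme
	./xdpsock -i eth0 -q 42 -r -f 4096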



* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-12 20:48 ` Jakub Kicinski
  2019-06-13 12:58   ` Björn Töpel
@ 2019-06-13 14:01   ` Maxim Mikityanskiy
  1 sibling, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-13 14:01 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On 2019-06-12 23:48, Jakub Kicinski wrote:
> On Wed, 12 Jun 2019 15:56:33 +0000, Maxim Mikityanskiy wrote:
>> UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
>> half of the available amount of RX queues are regular queues, and the
>> upper half are XSK RX queues.
> 
> If I have 32 queues enabled on the NIC

Let's say we have 32 combined channels. In this case RX queues 0..31 are 
regular ones, and 32..63 are XSK-ZC-enabled.

> and I install AF_XDP socket on
> queue 10

It'll trigger the compatibility mode of AF_XDP (without zero copy). You 
should use queue 42, which is in the 32..63 set.

> , does the NIC now have 64 RQs, but only first 32 are in the
> normal RSS map?

Only the regular 0..31 RX queues are part of RSS.
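
A sketch of the resulting mapping, assuming num_channels combined
channels (the helper names are made up, they are not part of the series):

static inline bool my_qid_is_xsk(u32 qid, u32 num_channels)
{
	return qid >= num_channels;	/* 32..63 when num_channels == 32 */
}

static inline u32 my_qid_to_channel(u32 qid, u32 num_channels)
{
	/* XSK qid 42 maps to channel 10 in the 32-channel example. */
	return my_qid_is_xsk(qid, num_channels) ? qid - num_channels : qid;
}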

> 
>> The patch "xsk: Extend channels to support combined XSK/non-XSK
>> traffic" was dropped. The final patch was reworked accordingly.
> 
> The final patch has 2k LoC, kind of hard to digest.  You can also
> post the cleanup patches separately, no need for a large series here.
> 

I used to have the final patch as three patches (add XSK stubs, add RX 
support and add TX support), but I prefer not to have this separation, 
because it doesn't look right to add empty stub functions with /* TODO: 
implement */ comments in one patch and to add the implementations 
immediately in the next patch.

Thanks for reviewing!
Max


* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-13 12:58   ` Björn Töpel
@ 2019-06-13 14:01     ` Maxim Mikityanskiy
  2019-06-13 14:11       ` Björn Töpel
  0 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-13 14:01 UTC (permalink / raw)
  To: Björn Töpel, Jakub Kicinski
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On 2019-06-13 15:58, Björn Töpel wrote:
> On Wed, 12 Jun 2019 at 22:49, Jakub Kicinski
> <jakub.kicinski@netronome.com> wrote:
>>
>> On Wed, 12 Jun 2019 15:56:33 +0000, Maxim Mikityanskiy wrote:
>>> UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
>>> half of the available amount of RX queues are regular queues, and the
>>> upper half are XSK RX queues.
>>
>> If I have 32 queues enabled on the NIC and I install AF_XDP socket on
>> queue 10, does the NIC now have 64 RQs, but only first 32 are in the
>> normal RSS map?
>>
> 
> Additional, related, question to Jakub's: Say that I'd like to hijack
> all 32 Rx queues of the NIC. I create 32 AF_XDP sockets and attach them
> in zero-copy mode to the device. What's the result?

There are 32 regular RX queues (0..31) and 32 XSK RX queues (32..63). If 
you want 32 zero-copy AF_XDP sockets, you can attach them to queues 
32..63, and the regular traffic won't be affected at all.
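
With libbpf that would look roughly like this (the interface name and
the umem/ring/config setup are omitted; treat it as a sketch, not as
code from the series):

	/* Bind 32 zero-copy sockets to the XSK queues 32..63; queues
	 * 0..31 keep receiving regular RSS traffic untouched.
	 */
	for (i = 0; i < 32; i++) {
		err = xsk_socket__create(&xsk[i], "eth0", 32 + i, umem,
					 &rx[i], &tx[i], &cfg);
		if (err)
			exit(EXIT_FAILURE); /* simplified error handling */
	}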

>>> The patch "xsk: Extend channels to support combined XSK/non-XSK
>>> traffic" was dropped. The final patch was reworked accordingly.
>>
>> The final patch has 2k LoC, kind of hard to digest.  You can also
>> post the cleanup patches separately, no need for a large series here.



* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-13 14:01     ` Maxim Mikityanskiy
@ 2019-06-13 14:11       ` Björn Töpel
  2019-06-13 14:53         ` Björn Töpel
  0 siblings, 1 reply; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 14:11 UTC (permalink / raw)
  To: Maxim Mikityanskiy, Björn Töpel, Jakub Kicinski
  Cc: Alexei Starovoitov, Daniel Borkmann, Magnus Karlsson, bpf,
	netdev, David S. Miller, Saeed Mahameed, Jonathan Lemon,
	Tariq Toukan, Martin KaFai Lau, Song Liu, Yonghong Song,
	Maciej Fijalkowski


On 2019-06-13 16:01, Maxim Mikityanskiy wrote:
> On 2019-06-13 15:58, Björn Töpel wrote:
>> On Wed, 12 Jun 2019 at 22:49, Jakub Kicinski
>> <jakub.kicinski@netronome.com> wrote:
>>>
>>> On Wed, 12 Jun 2019 15:56:33 +0000, Maxim Mikityanskiy wrote:
>>>> UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
>>>> half of the available amount of RX queues are regular queues, and the
>>>> upper half are XSK RX queues.
>>>
>>> If I have 32 queues enabled on the NIC and I install AF_XDP socket on
>>> queue 10, does the NIC now have 64 RQs, but only first 32 are in the
>>> normal RSS map?
>>>
>>
>> Additional, related, question to Jakub's: Say that I'd like to hijack
> >> all 32 Rx queues of the NIC. I create 32 AF_XDP sockets and attach them
>> in zero-copy mode to the device. What's the result?
> 
> There are 32 regular RX queues (0..31) and 32 XSK RX queues (32..63). If
> you want 32 zero-copy AF_XDP sockets, you can attach them to queues
> 32..63, and the regular traffic won't be affected at all.
> 
Thanks for getting back! More questions!

Ok, so I cannot (with zero-copy) get the regular traffic into AF_XDP
sockets?

How do the qids map? Can I only bind a zero-copy socket to qid 32..63 in
the example above?

Say that I have a copy-mode AF_XDP socket bound to queue 2. In this
case I will receive the regular traffic from queue 2. If I enable
zero-copy for the same queue, will this give an error, will it receive
AF_XDP-specific traffic from queue 2+32, or will it require an explicit
bind to one of the queues 32..63?


Thanks,
Björn


* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-13 14:01     ` Maxim Mikityanskiy
@ 2019-06-13 14:45       ` Maciej Fijalkowski
  2019-06-14 13:25         ` Maxim Mikityanskiy
  2019-06-13 18:09       ` Jakub Kicinski
  1 sibling, 1 reply; 48+ messages in thread
From: Maciej Fijalkowski @ 2019-06-13 14:45 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song

On Thu, 13 Jun 2019 14:01:39 +0000
Maxim Mikityanskiy <maximmi@mellanox.com> wrote:

> On 2019-06-12 23:23, Jakub Kicinski wrote:
> > On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:  
> >> Currently, libbpf uses the number of combined channels as the maximum
> >> queue number. However, the kernel has a different limitation:
> >>
> >> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
> >>
> >> - ethtool_set_channels() checks for UMEMs in queues up to
> >>    combined_count + max(rx_count, tx_count).
> >>
> >> libbpf shouldn't limit applications to a lower max queue number. Account
> >> for non-combined RX and TX channels when calculating the max queue
> >> number. Use the same formula that is used in ethtool.
> >>
> >> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> >> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> >> Acked-by: Saeed Mahameed <saeedm@mellanox.com>  
> > 
> > I don't think this is correct.  max_tx tells you how many TX channels
> > there can be, you can't add that to combined.  The correct calculation is:
> > 
> > max_num_chans = max(max_combined, max(max_rx, max_tx))  
> 
> First of all, I'm aligning with the formula in the kernel, which is:
> 
>      curr.combined_count + max(curr.rx_count, curr.tx_count);
>
> (see net/core/ethtool.c, ethtool_set_channels()).
> 
> The formula in libbpf should match it.
> 
> Second, the existing drivers have either combined channels or separate 
> rx and tx channels. So, for the first kind of drivers, max_tx doesn't 
> tell how many TX channels there can be, it just says 0, and max_combined 
> tells how many TX and RX channels are supported. As max_tx doesn't 
> include max_combined (and vice versa), we should add them up.
> 
> >>   tools/lib/bpf/xsk.c | 6 +++---
> >>   1 file changed, 3 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> >> index bf15a80a37c2..86107857e1f0 100644
> >> --- a/tools/lib/bpf/xsk.c
> >> +++ b/tools/lib/bpf/xsk.c
> >> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
> >>   		goto out;
> >>   	}
> >>   
> >> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
> >> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);

So in case of 32 HW queues you'd like to get 64 entries in xskmap? Do you still
have a need for attaching the xsksocks to the RSS queues? I thought you wanted
them to be separated. So if I'm reading this right, [0, 31] xskmap entries
would be unused most of the time, no?

> >> +
> >> +	if (ret == 0 || errno == EOPNOTSUPP)
> >>   		/* If the device says it has no channels, then all traffic
> >>   		 * is sent to a single stream, so max queues = 1.
> >>   		 */
> >>   		ret = 1;
> >> -	else
> >> -		ret = channels.max_combined;
> >>   
> >>   out:
> >>   	close(fd);  
> >   
> 



* Re: [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support
  2019-06-13 14:11       ` Björn Töpel
@ 2019-06-13 14:53         ` Björn Töpel
  0 siblings, 0 replies; 48+ messages in thread
From: Björn Töpel @ 2019-06-13 14:53 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Jakub Kicinski, Björn Töpel, Alexei Starovoitov,
	Daniel Borkmann, Magnus Karlsson, bpf, netdev, David S. Miller,
	Saeed Mahameed, Jonathan Lemon, Tariq Toukan, Martin KaFai Lau,
	Song Liu, Yonghong Song, Maciej Fijalkowski

On Thu, 13 Jun 2019 at 16:11, Björn Töpel <bjorn.topel@intel.com> wrote:
>
>
> On 2019-06-13 16:01, Maxim Mikityanskiy wrote:
> > On 2019-06-13 15:58, Björn Töpel wrote:
> >> On Wed, 12 Jun 2019 at 22:49, Jakub Kicinski
> >> <jakub.kicinski@netronome.com> wrote:
> >>>
> >>> On Wed, 12 Jun 2019 15:56:33 +0000, Maxim Mikityanskiy wrote:
> >>>> UAPI is not changed, XSK RX queues are exposed to the kernel. The lower
> >>>> half of the available amount of RX queues are regular queues, and the
> >>>> upper half are XSK RX queues.
> >>>
> >>> If I have 32 queues enabled on the NIC and I install AF_XDP socket on
> >>> queue 10, does the NIC now have 64 RQs, but only first 32 are in the
> >>> normal RSS map?
> >>>
> >>
> >> Additional, related, question to Jakub's: Say that I'd like to hijack
> >> all 32 Rx queues of the NIC. I create 32 AF_XDP sockets and attach them
> >> in zero-copy mode to the device. What's the result?
> >
> > There are 32 regular RX queues (0..31) and 32 XSK RX queues (32..63). If
> > you want 32 zero-copy AF_XDP sockets, you can attach them to queues
> > 32..63, and the regular traffic won't be affected at all.
> >
> Thanks for getting back! More questions!
>
> Ok, so I cannot (with zero-copy) get the regular traffic into AF_XDP
> sockets?
>
> How do the qids map? Can I only bind a zero-copy socket to qid 32..63 in
> the example above?
>
> Say that I have a copy-mode AF_XDP socket bound to queue 2. In this
> case I will receive the regular traffic from queue 2. Enabling zero-copy
> for the same queue, will this give an error, or receive AF_XDP specific
> traffic from queue 2+32? Or return an error, and require an explicit
> bind to any of the queues 32..63?
>
>

Let me expand a bit on why I'm asking these qid questions.

It's unfortunate that vendors have different views/mappings of
"qids". For Intel, we allow binding a zero-copy socket to all Rx
qids. For Mellanox, a certain set of qids is allowed for zero-copy
sockets.

This highlights a need for a better abstraction for queues than "some
queue id from ethtool". This will take some time, and I think that we
have to accept for now that we'll have different behavior/mapping for
zero-copy sockets on different NICs.

Let's address this need for a better queue abstraction, but that
shouldn't block this series IMO. Other than patch:

"[PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels"

which I'd like to see a bit more discussion on, I'm OK with this
series. I haven't been able to test it (no hardware "hint, hint"), but
I know Jonathan has been running it.

Thanks for working on this, Max!

Björn


* Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-13 14:01     ` Maxim Mikityanskiy
@ 2019-06-13 17:29       ` Jakub Kicinski
  2019-06-14 13:25         ` Maxim Mikityanskiy
  0 siblings, 1 reply; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-13 17:29 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
> On 2019-06-12 23:10, Jakub Kicinski wrote:
> > On Wed, 12 Jun 2019 15:56:43 +0000, Maxim Mikityanskiy wrote:  
> >> The typical XDP memory scheme is one packet per page. Change the AF_XDP
> >> frame size in libbpf to 4096, which is the page size on x86, to allow
> >> libbpf to be used with the drivers with the packet-per-page scheme.  
> > 
> > This is slightly surprising.  Why does the driver care about the bufsz?  
> 
> The classic XDP implementation supports only the packet-per-page scheme. 
> mlx5e implements this scheme, because it perfectly fits with xdp_return 
> and page pool APIs. AF_XDP relies on XDP, and even though AF_XDP doesn't 
> really allocate or release pages, it works on top of XDP, and XDP 
> implementation in mlx5e does allocate and release pages (in general 
> case) and works with the packet-per-page scheme.

Yes, okay, I get that.  But I still don't know what's the exact use you
have for AF_XDP buffers being 4k..  Could you point us in the code to
the place which relies on all buffers being 4k in any XDP scenario?

> > You're not supposed to do page operations on UMEM pages, anyway.
> > And the RX size filter should be configured according to MTU regardless
> > of XDP state.  
> 
> Yes, of course, MTU is taken into account.
> 
> > Can you explain?
> >   
> >> Add a command line option -f to xdpsock to allow to specify a custom
> >> frame size.
> >>
> >> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> >> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> >> Acked-by: Saeed Mahameed <saeedm@mellanox.com>  


* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-13 12:41     ` Björn Töpel
@ 2019-06-13 17:34       ` Jakub Kicinski
  0 siblings, 0 replies; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-13 17:34 UTC (permalink / raw)
  To: Björn Töpel
  Cc: Maxim Mikityanskiy, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On Thu, 13 Jun 2019 14:41:30 +0200, Björn Töpel wrote:
> On Wed, 12 Jun 2019 at 22:24, Jakub Kicinski
> <jakub.kicinski@netronome.com> wrote:
> >
> > On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:  
> > > Currently, libbpf uses the number of combined channels as the maximum
> > > queue number. However, the kernel has a different limitation:
> > >
> > > - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
> > >
> > > - ethtool_set_channels() checks for UMEMs in queues up to
> > >   combined_count + max(rx_count, tx_count).
> > >
> > > libbpf shouldn't limit applications to a lower max queue number. Account
> > > for non-combined RX and TX channels when calculating the max queue
> > > number. Use the same formula that is used in ethtool.
> > >
> > > Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> > > Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> > > Acked-by: Saeed Mahameed <saeedm@mellanox.com>  
> >
> > I don't think this is correct.  max_tx tells you how many TX channels
> > there can be, you can't add that to combined.  The correct calculation is:
> >
> > max_num_chans = max(max_combined, max(max_rx, max_tx))
> >  
> 
> ...but the inner max should be min, right?
> 
> Assuming we'd like to receive and send.

That was my knee-jerk reaction too, but I think this is only used to
size the array (I could be wrong).  In which case we need an index for
unidirectional socks, too.  Perhaps the helper could be named better if
my understanding is correct :(


* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-13 14:01     ` Maxim Mikityanskiy
  2019-06-13 14:45       ` Maciej Fijalkowski
@ 2019-06-13 18:09       ` Jakub Kicinski
  2019-06-14 13:25         ` Maxim Mikityanskiy
  1 sibling, 1 reply; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-13 18:09 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
> On 2019-06-12 23:23, Jakub Kicinski wrote:
> > On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:  
> >> Currently, libbpf uses the number of combined channels as the maximum
> >> queue number. However, the kernel has a different limitation:
> >>
> >> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
> >>
> >> - ethtool_set_channels() checks for UMEMs in queues up to
> >>    combined_count + max(rx_count, tx_count).
> >>
> >> libbpf shouldn't limit applications to a lower max queue number. Account
> >> for non-combined RX and TX channels when calculating the max queue
> >> number. Use the same formula that is used in ethtool.
> >>
> >> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> >> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> >> Acked-by: Saeed Mahameed <saeedm@mellanox.com>  
> > 
> > I don't think this is correct.  max_tx tells you how many TX channels
> > there can be, you can't add that to combined.  The correct calculation is:
> > 
> > max_num_chans = max(max_combined, max(max_rx, max_tx))  
> 
> First of all, I'm aligning with the formula in the kernel, which is:
> 
>      curr.combined_count + max(curr.rx_count, curr.tx_count);
> 
> (see net/core/ethtool.c, ethtool_set_channels()).

curr != max.  The ethtool code you're pointing me to (and which I wrote)
uses the current allocation, not the max values.

> The formula in libbpf should match it.

The formula should be based on understanding what we're doing, 
not copying some not-really-equivalent code from somewhere :)

Combined is basically a queue pair, RX is an RX ring with a dedicated
IRQ, and TX is a TX ring with a dedicated IRQ.  If driver supports both
combined and single purpose interrupt vectors it will most likely set

	max_rx = num_hw_rx
	max_tx = num_hw_tx
	max_combined = min(rx, tx)

Like with most ethtool APIs there are some variations to this.

> Second, the existing drivers have either combined channels or separate 
> rx and tx channels. So, for the first kind of drivers, max_tx doesn't 
> tell how many TX channels there can be, it just says 0, and max_combined 
> tells how many TX and RX channels are supported. As max_tx doesn't 
> include max_combined (and vice versa), we should add them up.

By existing drivers you mean Intel drivers which implement AF_XDP, 
and your driver?  Both Intel and MLX drivers seem to only set
max_combined.

If you mean all drivers across the kernel, then I believe the best
formula is what I gave you.

> >>   tools/lib/bpf/xsk.c | 6 +++---
> >>   1 file changed, 3 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> >> index bf15a80a37c2..86107857e1f0 100644
> >> --- a/tools/lib/bpf/xsk.c
> >> +++ b/tools/lib/bpf/xsk.c
> >> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
> >>   		goto out;
> >>   	}
> >>   
> >> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
> >> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
> >> +
> >> +	if (ret == 0 || errno == EOPNOTSUPP)
> >>   		/* If the device says it has no channels, then all traffic
> >>   		 * is sent to a single stream, so max queues = 1.
> >>   		 */
> >>   		ret = 1;
> >> -	else
> >> -		ret = channels.max_combined;
> >>   
> >>   out:
> >>   	close(fd);  


* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-13 18:09       ` Jakub Kicinski
@ 2019-06-14 13:25         ` Maxim Mikityanskiy
  2019-06-15  2:12           ` Jakub Kicinski
  0 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-14 13:25 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On 2019-06-13 21:09, Jakub Kicinski wrote:
> On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
>> On 2019-06-12 23:23, Jakub Kicinski wrote:
>>> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
>>>> Currently, libbpf uses the number of combined channels as the maximum
>>>> queue number. However, the kernel has a different limitation:
>>>>
>>>> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
>>>>
>>>> - ethtool_set_channels() checks for UMEMs in queues up to
>>>>     combined_count + max(rx_count, tx_count).
>>>>
>>>> libbpf shouldn't limit applications to a lower max queue number. Account
>>>> for non-combined RX and TX channels when calculating the max queue
>>>> number. Use the same formula that is used in ethtool.
>>>>
>>>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
>>>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
>>>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
>>>
>>> I don't think this is correct.  max_tx tells you how many TX channels
>>> there can be, you can't add that to combined.  The correct calculation is:
>>>
>>> max_num_chans = max(max_combined, max(max_rx, max_tx))
>>
>> First of all, I'm aligning with the formula in the kernel, which is:
>>
>>       curr.combined_count + max(curr.rx_count, curr.tx_count);
>>
>> (see net/core/ethtool.c, ethtool_set_channels()).
> 
> curr != max.  ethtool code you're pointing me to (and which I wrote)
> uses the current allocation, not the max values.

The ethtool code uses curr, because it wants to calculate the number of 
queues currently in use. libbpf uses max, because it wants to calculate 
the maximum number of queues that can be in use. That's the only 
difference, so the formula should be the same, and this calculation can 
be applied either to curr or to max.

Imagine you have configured the NIC to have the maximum supported number 
of channels. Then your formula in ethtool.c returns some value. Exactly 
the same value should also be returned from libbpf's 
xsk_get_max_queues(). It's achieved by applying your formula directly to 
max.

>> The formula in libbpf should match it.
> 
> The formula should be based on understanding what we're doing,
> not copying some not-really-equivalent code from somewhere :)

I have understanding of the code I write.

> Combined is basically a queue pair, RX is an RX ring with a dedicated
> IRQ, and TX is a TX ring with a dedicated IRQ.  If driver supports both
> combined and single purpose interrupt vectors it will most likely set
> 
> 	max_rx = num_hw_rx
> 	max_tx = num_hw_tx
> 	max_combined = min(rx, tx)

OK, I got your example. The driver you are talking about won't support 
setting rx_count = max_rx, tx_count = max_tx and combined_count = 
max_combined simultaneously.

However, xsk_get_max_queues has to return the maximum number of queues 
theoretically possible with this device, to create an xsks_map of 
sufficient size. Currently, it totally ignores devices without combined 
channels, so max_rx and max_tx have to be considered in the calculation. 
The next thing is that the ethtool API doesn't really tell you whether 
the device can create up to max_rx RX channels, max_tx TX channels and 
max_combined combined channels simultaneously, or whether there are some 
additional limitations. Your example displays such a limitation, but 
it's not the only possible one, and this limitation is not even 
mandatory for all drivers. As ethtool doesn't expose the information 
about additional limitations imposed by the driver, and as it won't hurt 
if xsks_map is bigger than necessary, my vision is that we shouldn't 
assume any limitations we are not sure about, so max_combined + 
max(max_rx, max_tx) is the right thing to do.
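
As a concrete check (assuming num_hw_rx = num_hw_tx = 32 in your
example, so max_rx = max_tx = 32 and max_combined = min(32, 32) = 32):
your formula gives max(32, max(32, 32)) = 32, while max_combined +
max(max_rx, max_tx) gives 32 + 32 = 64. The latter may oversize
xsks_map, but it can never undersize it.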

> Like with most ethtool APIs there are some variations to this.
> 
>> Second, the existing drivers have either combined channels or separate
>> rx and tx channels. So, for the first kind of drivers, max_tx doesn't
>> tell how many TX channels there can be, it just says 0, and max_combined
>> tells how many TX and RX channels are supported. As max_tx doesn't
>> include max_combined (and vice versa), we should add them up.
> 
> By existing drivers you mean Intel drivers which implement AF_XDP,
> and your driver?

No, I meant all drivers, not only AF_XDP-enabled ones. I wasn't aware 
that some of them support the choice between a combined channel and a 
unidirectional channel; however, I still find my formula correct (see 
the explanation above).

> Both Intel and MLX drivers seem to only set
> max_combined.

mlx4 doesn't support combined channels, but it's out of scope of this 
patchset.

> If you mean all drivers across the kernel, then I believe the best
> formula is what I gave you.
> 
>>>>    tools/lib/bpf/xsk.c | 6 +++---
>>>>    1 file changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
>>>> index bf15a80a37c2..86107857e1f0 100644
>>>> --- a/tools/lib/bpf/xsk.c
>>>> +++ b/tools/lib/bpf/xsk.c
>>>> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
>>>>    		goto out;
>>>>    	}
>>>>    
>>>> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
>>>> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
>>>> +
>>>> +	if (ret == 0 || errno == EOPNOTSUPP)
>>>>    		/* If the device says it has no channels, then all traffic
>>>>    		 * is sent to a single stream, so max queues = 1.
>>>>    		 */
>>>>    		ret = 1;
>>>> -	else
>>>> -		ret = channels.max_combined;
>>>>    
>>>>    out:
>>>>    	close(fd);



* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-13 14:45       ` Maciej Fijalkowski
@ 2019-06-14 13:25         ` Maxim Mikityanskiy
  2019-06-14 17:15           ` Maciej Fijalkowski
  0 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-14 13:25 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song

On 2019-06-13 17:45, Maciej Fijalkowski wrote:
> On Thu, 13 Jun 2019 14:01:39 +0000
> Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
> 
>> On 2019-06-12 23:23, Jakub Kicinski wrote:
>>> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
>>>> Currently, libbpf uses the number of combined channels as the maximum
>>>> queue number. However, the kernel has a different limitation:
>>>>
>>>> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
>>>>
>>>> - ethtool_set_channels() checks for UMEMs in queues up to
>>>>     combined_count + max(rx_count, tx_count).
>>>>
>>>> libbpf shouldn't limit applications to a lower max queue number. Account
>>>> for non-combined RX and TX channels when calculating the max queue
>>>> number. Use the same formula that is used in ethtool.
>>>>
>>>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
>>>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
>>>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
>>>
>>> I don't think this is correct.  max_tx tells you how many TX channels
>>> there can be, you can't add that to combined.  The correct calculation is:
>>>
>>> max_num_chans = max(max_combined, max(max_rx, max_tx))
>>
>> First of all, I'm aligning with the formula in the kernel, which is:
>>
>>       curr.combined_count + max(curr.rx_count, curr.tx_count);
>>
>> (see net/core/ethtool.c, ethtool_set_channels()).
>>
>> The formula in libbpf should match it.
>>
>> Second, the existing drivers have either combined channels or separate
>> rx and tx channels. So, for the first kind of drivers, max_tx doesn't
>> tell how many TX channels there can be, it just says 0, and max_combined
>> tells how many TX and RX channels are supported. As max_tx doesn't
>> include max_combined (and vice versa), we should add them up.
>>
>>>>    tools/lib/bpf/xsk.c | 6 +++---
>>>>    1 file changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
>>>> index bf15a80a37c2..86107857e1f0 100644
>>>> --- a/tools/lib/bpf/xsk.c
>>>> +++ b/tools/lib/bpf/xsk.c
>>>> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
>>>>    		goto out;
>>>>    	}
>>>>    
>>>> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
>>>> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
> 
> So in case of 32 HW queues you'd like to get 64 entries in xskmap?

"32 HW queues" is not quite correct. It will be 32 combined channels, 
each with one regular RX queue and one XSK RX queue (regular RX queues 
are part of RSS). In this case, I'll have 64 XSKMAP entries.

> Do you still
> have a need for attaching the xsksocks to the RSS queues?

You can attach an XSK to a regular RX queue, but not in zero-copy mode. 
The intended use is, of course, to attach XSKs to XSK RX queues in 
zero-copy mode.

> I thought you wanted
> them to be separated. So if I'm reading this right, [0, 31] xskmap entries
> would be unused most of the time, no?

This is correct, but these entries are still needed if one decides to 
run compatibility mode without zero-copy on queues 0..31.

> 
>>>> +
>>>> +	if (ret == 0 || errno == EOPNOTSUPP)
>>>>    		/* If the device says it has no channels, then all traffic
>>>>    		 * is sent to a single stream, so max queues = 1.
>>>>    		 */
>>>>    		ret = 1;
>>>> -	else
>>>> -		ret = channels.max_combined;
>>>>    
>>>>    out:
>>>>    	close(fd);
>>>    
>>
> 



* Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-13 17:29       ` Jakub Kicinski
@ 2019-06-14 13:25         ` Maxim Mikityanskiy
  2019-06-15  1:40           ` Jakub Kicinski
  0 siblings, 1 reply; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-14 13:25 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On 2019-06-13 20:29, Jakub Kicinski wrote:
> On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
>> On 2019-06-12 23:10, Jakub Kicinski wrote:
>>> On Wed, 12 Jun 2019 15:56:43 +0000, Maxim Mikityanskiy wrote:
>>>> The typical XDP memory scheme is one packet per page. Change the AF_XDP
>>>> frame size in libbpf to 4096, which is the page size on x86, to allow
>>>> libbpf to be used with the drivers with the packet-per-page scheme.
>>>
>>> This is slightly surprising.  Why does the driver care about the bufsz?
>>
>> The classic XDP implementation supports only the packet-per-page scheme.
>> mlx5e implements this scheme, because it perfectly fits with xdp_return
>> and page pool APIs. AF_XDP relies on XDP, and even though AF_XDP doesn't
>> really allocate or release pages, it works on top of XDP, and XDP
>> implementation in mlx5e does allocate and release pages (in general
>> case) and works with the packet-per-page scheme.
> 
> Yes, okay, I get that.  But I still don't know what's the exact use you
> have for AF_XDP buffers being 4k..  Could you point us in the code to
> the place which relies on all buffers being 4k in any XDP scenario?

1. An XDP program is set on all queues, so to support non-4k AF_XDP 
frames, we would also need to support multiple-packet-per-page XDP for 
regular queues.

2. Page allocation in mlx5e perfectly fits page-sized XDP frames. Some 
examples in the code are:

2.1. mlx5e_free_rx_mpwqe calls a generic mlx5e_page_release to release 
the pages of a MPWQE (multi-packet work queue element), which is 
implemented as xsk_umem_fq_reuse for the case of XSK. We avoid extra 
overhead by using the fact that packet == page.

2.2. mlx5e_free_xdpsq_desc performs cleanup after XDP transmits. In case 
of XDP_TX, we can free/recycle the pages without having a refcount 
overhead, by using the fact that packet == page.

>>> You're not supposed to do page operations on UMEM pages, anyway.
>>> And the RX size filter should be configured according to MTU regardless
>>> of XDP state.
>>
>> Yes, of course, MTU is taken into account.
>>
>>> Can you explain?
>>>    
>>>> Add a command line option -f to xdpsock to allow to specify a custom
>>>> frame size.
>>>>
>>>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
>>>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
>>>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>



* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-14 13:25         ` Maxim Mikityanskiy
@ 2019-06-14 17:15           ` Maciej Fijalkowski
  2019-06-14 19:50             ` Björn Töpel
  2019-06-18 12:00             ` Maxim Mikityanskiy
  0 siblings, 2 replies; 48+ messages in thread
From: Maciej Fijalkowski @ 2019-06-14 17:15 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song

On Fri, 14 Jun 2019 13:25:24 +0000
Maxim Mikityanskiy <maximmi@mellanox.com> wrote:

> On 2019-06-13 17:45, Maciej Fijalkowski wrote:
> > On Thu, 13 Jun 2019 14:01:39 +0000
> > Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
> >   
> >> On 2019-06-12 23:23, Jakub Kicinski wrote:  
> >>> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:  
> >>>> Currently, libbpf uses the number of combined channels as the maximum
> >>>> queue number. However, the kernel has a different limitation:
> >>>>
> >>>> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
> >>>>
> >>>> - ethtool_set_channels() checks for UMEMs in queues up to
> >>>>     combined_count + max(rx_count, tx_count).
> >>>>
> >>>> libbpf shouldn't limit applications to a lower max queue number. Account
> >>>> for non-combined RX and TX channels when calculating the max queue
> >>>> number. Use the same formula that is used in ethtool.
> >>>>
> >>>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
> >>>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
> >>>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>  
> >>>
> >>> I don't think this is correct.  max_tx tells you how many TX channels
> >>> there can be, you can't add that to combined.  The correct calculation is:
> >>>
> >>> max_num_chans = max(max_combined, max(max_rx, max_tx))  
> >>
> >> First of all, I'm aligning with the formula in the kernel, which is:
> >>
> >>       curr.combined_count + max(curr.rx_count, curr.tx_count);
> >>
> >> (see net/core/ethtool.c, ethtool_set_channels()).
> >>
> >> The formula in libbpf should match it.
> >>
> >> Second, the existing drivers have either combined channels or separate
> >> rx and tx channels. So, for the first kind of drivers, max_tx doesn't
> >> tell how many TX channels there can be, it just says 0, and max_combined
> >> tells how many TX and RX channels are supported. As max_tx doesn't
> >> include max_combined (and vice versa), we should add them up.
> >>  
> >>>>    tools/lib/bpf/xsk.c | 6 +++---
> >>>>    1 file changed, 3 insertions(+), 3 deletions(-)
> >>>>
> >>>> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> >>>> index bf15a80a37c2..86107857e1f0 100644
> >>>> --- a/tools/lib/bpf/xsk.c
> >>>> +++ b/tools/lib/bpf/xsk.c
> >>>> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
> >>>>    		goto out;
> >>>>    	}
> >>>>    
> >>>> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
> >>>> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);  
> > 
> > So in case of 32 HW queues you'd like to get 64 entries in xskmap?  
> 
> "32 HW queues" is not quite correct. It will be 32 combined channels, 
> each with one regular RX queue and one XSK RX queue (regular RX queues 
> are part of RSS). In this case, I'll have 64 XSKMAP entries.
> 
> > Do you still
> > have a need for attaching the xsksocks to the RSS queues?  
> 
> You can attach an XSK to a regular RX queue, but not in zero-copy mode. 
> The intended use is, of course, to attach XSKs to XSK RX queues in 
> zero-copy mode.
>
> > I thought you wanted
> > them to be separated. So if I'm reading this right, [0, 31] xskmap entries
> > would be unused most of the time, no?
> 
> This is correct, but these entries are still needed if one decides to 
> run compatibility mode without zero-copy on queues 0..31.

Why would I want to run AF_XDP without ZC? The main reason for having AF_XDP
support in drivers is zero copy, right?

Besides that, are you educating the user in some way about which queue ids should
be used so there's ZC in the picture? If that was already asked/answered, then sorry
about that.

> 
> >   
> >>>> +
> >>>> +	if (ret == 0 || errno == EOPNOTSUPP)
> >>>>    		/* If the device says it has no channels, then all traffic
> >>>>    		 * is sent to a single stream, so max queues = 1.
> >>>>    		 */
> >>>>    		ret = 1;
> >>>> -	else
> >>>> -		ret = channels.max_combined;
> >>>>    
> >>>>    out:
> >>>>    	close(fd);  
> >>>      
> >>  
> >   
> 


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-14 17:15           ` Maciej Fijalkowski
@ 2019-06-14 19:50             ` Björn Töpel
  2019-06-18 12:00             ` Maxim Mikityanskiy
  1 sibling, 0 replies; 48+ messages in thread
From: Björn Töpel @ 2019-06-14 19:50 UTC (permalink / raw)
  To: Maciej Fijalkowski, Maxim Mikityanskiy
  Cc: Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song

On 2019-06-14 19:15, Maciej Fijalkowski wrote:
> Why would I want to run AF_XDP without ZC? The main reason for having AF_XDP
> support in drivers is zero copy, right?

In general I agree with you on this point. Short-term, I see copy-mode
useful for API adoption reasons (as XDP_SKB), so from that perspective
it's important. Longer term I'd like to explore AF_XDP as a faster
AF_PACKET for pcap functionality.


Björn


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-14 13:25         ` Maxim Mikityanskiy
@ 2019-06-15  1:40           ` Jakub Kicinski
  2019-06-18 12:00             ` Maxim Mikityanskiy
  0 siblings, 1 reply; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-15  1:40 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On Fri, 14 Jun 2019 13:25:28 +0000, Maxim Mikityanskiy wrote:
> On 2019-06-13 20:29, Jakub Kicinski wrote:
> > On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:  
> > 
> > Yes, okay, I get that.  But I still don't know what exact use you
> > have for AF_XDP buffers being 4k..  Could you point us in the code to
> > the place which relies on all buffers being 4k in any XDP scenario?  

Okay, I still don't get it, but thanks for explaining :)  Perhaps it
will become clearer when you respin with patch 17 split into
reviewable chunks :)

> 1. An XDP program is set on all queues, so to support non-4k AF_XDP 
> frames, we would also need to support multiple-packet-per-page XDP for 
> regular queues.

Mm.. do you have some materials on how the mlx5 DMA/RX works?  I'd think
that if you do a single packet per buffer, then as long as all packets are
guaranteed to fit in the buffer (based on MRU), the HW shouldn't care
what the size of the buffer is.

> 2. Page allocation in mlx5e perfectly fits page-sized XDP frames. Some 
> examples in the code are:
> 
> 2.1. mlx5e_free_rx_mpwqe calls a generic mlx5e_page_release to release 
> the pages of a MPWQE (multi-packet work queue element), which is 
> implemented as xsk_umem_fq_reuse for the case of XSK. We avoid extra 
> overhead by using the fact that packet == page.
> 
> 2.2. mlx5e_free_xdpsq_desc performs cleanup after XDP transmits. In case 
> of XDP_TX, we can free/recycle the pages without having a refcount 
> overhead, by using the fact that packet == page.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-14 13:25         ` Maxim Mikityanskiy
@ 2019-06-15  2:12           ` Jakub Kicinski
  0 siblings, 0 replies; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-15  2:12 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On Fri, 14 Jun 2019 13:25:05 +0000, Maxim Mikityanskiy wrote:
> Imagine you have configured the NIC to have the maximum supported number 
> of channels. Then your formula in ethtool.c returns some value. Exactly 
> the same value should also be returned from libbpf's 
> xsk_get_max_queues(). That's achieved by applying your formula directly to 
> the max_* fields.

I'm just trying to limit people inventing their own interpretations 
of this API.  Broadcom, for instance, does something dumb with the current
counts: I think they return curr.combined == curr.rx, even though there
are only curr.combined rings...

You will over-allocate space for all NICs that return both combined and
non-combined counts.  But that's not a huge deal, not worth arguing about.
Moving on..
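
For reference, here are the two competing computations as a standalone
sketch (struct ethtool_channels is from <linux/ethtool.h>; the helper
names are made up for illustration):

#include <linux/ethtool.h>

#define max(a, b) ((a) > (b) ? (a) : (b))

/* What patch 7 does: add combined to the larger of rx/tx, mirroring
 * the ethtool_set_channels() check on the current counts. */
static __u32 max_queues_summed(const struct ethtool_channels *ch)
{
	return ch->max_combined + max(ch->max_rx, ch->max_tx);
}

/* The alternative suggested in review: take the maximum instead,
 * which avoids over-allocating for NICs reporting both kinds. */
static __u32 max_queues_maxed(const struct ethtool_channels *ch)
{
	return max(ch->max_combined, max(ch->max_rx, ch->max_tx));
}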

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 17/17] net/mlx5e: Add XSK zero-copy support
       [not found] ` <20190612155605.22450-18-maximmi@mellanox.com>
@ 2019-06-15 15:42   ` Jakub Kicinski
  2019-06-18 12:00     ` Maxim Mikityanskiy
  0 siblings, 1 reply; 48+ messages in thread
From: Jakub Kicinski @ 2019-06-15 15:42 UTC (permalink / raw)
  To: Maxim Mikityanskiy
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On Wed, 12 Jun 2019 15:57:09 +0000, Maxim Mikityanskiy wrote:
> @@ -390,6 +391,12 @@ void mlx5e_ethtool_get_channels(struct mlx5e_priv *priv,
>  {
>  	ch->max_combined   = mlx5e_get_netdev_max_channels(priv->netdev);
>  	ch->combined_count = priv->channels.params.num_channels;
> +
> +	/* XSK RQs */
> +	ch->max_rx         = ch->max_combined;
> +	/* rx_count shows the number of XSK RQs up to the highest active one. */
> +	ch->rx_count       = mlx5e_xsk_first_unused_channel(&priv->channels.params,
> +							    &priv->xsk);
>  }

Ah, Maciej pointed out to me this is why you want patch 7 to do
what it does.  This count is for the stack's queues.

Nacked-by: Jakub Kicinski <jakub.kicinski@netronome.com>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it
  2019-06-15  1:40           ` Jakub Kicinski
@ 2019-06-18 12:00             ` Maxim Mikityanskiy
  0 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-18 12:00 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song, Maciej Fijalkowski

On 2019-06-15 04:40, Jakub Kicinski wrote:
> On Fri, 14 Jun 2019 13:25:28 +0000, Maxim Mikityanskiy wrote:
>> On 2019-06-13 20:29, Jakub Kicinski wrote:
>>> On Thu, 13 Jun 2019 14:01:39 +0000, Maxim Mikityanskiy wrote:
>>>
>>> Yes, okay, I get that.  But I still don't know what exact use you
>>> have for AF_XDP buffers being 4k..  Could you point us in the code to
>>> the place which relies on all buffers being 4k in any XDP scenario?
> 
> Okay, I still don't get it, but thanks for explaining :)  Perhaps it
> will become clearer when you respin with patch 17 split into
> reviewable chunks :)

I'm sorry, as I said above, I don't think splitting it is necessary or 
a good thing to do. I used to have it separated, but I squashed the patches 
to shorten the series and to avoid having stupid /* TODO: implement */ 
comments in empty functions that are implemented in the next patch. 
Unsquashing them is going to take more time, which I unfortunately don't 
have as I'm flying to Netconf tomorrow and then going on vacation. So, I 
would really like to avoid it unless absolutely necessary. Moreover, it 
won't increase readability - you'll have to jump between the patches to 
see the complete implementation of a single function - it's a single 
feature, after all.

>> 1. An XDP program is set on all queues, so to support non-4k AF_XDP
>> frames, we would also need to support multiple-packet-per-page XDP for
>> regular queues.
> 
> Mm.. do you have some materials on how the mlx5 DMA/RX works?  I'd think
> that if you do a single packet per buffer, then as long as all packets are
> guaranteed to fit in the buffer (based on MRU), the HW shouldn't care
> what the size of the buffer is.

It's not related to hardware; it helps get better performance by 
utilizing the page pool in the optimal way (without having refcnt == 2 on 
pages). Maybe Tariq or Saeed could explain it more clearly.
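
To illustrate the packet == page argument with a toy userspace sketch
(not mlx5e code; the sizes are the ones discussed in this thread):

#include <stdio.h>

/* With 4096-byte frames, a page backs exactly one frame, so a
 * completed frame leaves the page free to recycle directly. With
 * 2048-byte frames, two frames share a page (refcnt == 2), so every
 * completion has to go through the refcount path instead. */
int main(void)
{
	const unsigned int page_size = 4096;
	const unsigned int frame_sizes[] = { 2048, 4096 };

	for (int i = 0; i < 2; i++) {
		unsigned int per_page = page_size / frame_sizes[i];

		printf("frame %u: %u frame(s)/page -> %s\n",
		       frame_sizes[i], per_page,
		       per_page == 1 ? "recycle page directly"
				     : "per-page refcounting");
	}
	return 0;
}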

>> 2. Page allocation in mlx5e perfectly fits page-sized XDP frames. Some
>> examples in the code are:
>>
>> 2.1. mlx5e_free_rx_mpwqe calls a generic mlx5e_page_release to release
>> the pages of a MPWQE (multi-packet work queue element), which is
>> implemented as xsk_umem_fq_reuse for the case of XSK. We avoid extra
>> overhead by using the fact that packet == page.
>>
>> 2.2. mlx5e_free_xdpsq_desc performs cleanup after XDP transmits. In case
>> of XDP_TX, we can free/recycle the pages without having a refcount
>> overhead, by using the fact that packet == page.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels
  2019-06-14 17:15           ` Maciej Fijalkowski
  2019-06-14 19:50             ` Björn Töpel
@ 2019-06-18 12:00             ` Maxim Mikityanskiy
  1 sibling, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-18 12:00 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Björn Töpel, Magnus Karlsson, bpf, netdev,
	David S. Miller, Saeed Mahameed, Jonathan Lemon, Tariq Toukan,
	Martin KaFai Lau, Song Liu, Yonghong Song

On 2019-06-14 20:15, Maciej Fijalkowski wrote:
> On Fri, 14 Jun 2019 13:25:24 +0000
> Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
> 
>> On 2019-06-13 17:45, Maciej Fijalkowski wrote:
>>> On Thu, 13 Jun 2019 14:01:39 +0000
>>> Maxim Mikityanskiy <maximmi@mellanox.com> wrote:
>>>    
>>>> On 2019-06-12 23:23, Jakub Kicinski wrote:
>>>>> On Wed, 12 Jun 2019 15:56:48 +0000, Maxim Mikityanskiy wrote:
>>>>>> Currently, libbpf uses the number of combined channels as the maximum
>>>>>> queue number. However, the kernel has a different limitation:
>>>>>>
>>>>>> - xdp_reg_umem_at_qid() allows up to max(RX queues, TX queues).
>>>>>>
>>>>>> - ethtool_set_channels() checks for UMEMs in queues up to
>>>>>>      combined_count + max(rx_count, tx_count).
>>>>>>
>>>>>> libbpf shouldn't limit applications to a lower max queue number. Account
>>>>>> for non-combined RX and TX channels when calculating the max queue
>>>>>> number. Use the same formula that is used in ethtool.
>>>>>>
>>>>>> Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
>>>>>> Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
>>>>>> Acked-by: Saeed Mahameed <saeedm@mellanox.com>
>>>>>
>>>>> I don't think this is correct.  max_tx tells you how many TX channels
>>>>> there can be; you can't add that to combined.  The correct calculation is:
>>>>>
>>>>> max_num_chans = max(max_combined, max(max_rx, max_tx))
>>>>
>>>> First of all, I'm aligning with the formula in the kernel, which is:
>>>>
>>>>        curr.combined_count + max(curr.rx_count, curr.tx_count);
>>>>
>>>> (see net/core/ethtool.c, ethtool_set_channels()).
>>>>
>>>> The formula in libbpf should match it.
>>>>
>>>> Second, the existing drivers have either combined channels or separate
>>>> rx and tx channels. So, for the first kind of drivers, max_tx doesn't
>>>> tell how many TX channels there can be, it just says 0, and max_combined
>>>> tells how many TX and RX channels are supported. As max_tx doesn't
>>>> include max_combined (and vice versa), we should add them up.
>>>>   
>>>>>>     tools/lib/bpf/xsk.c | 6 +++---
>>>>>>     1 file changed, 3 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
>>>>>> index bf15a80a37c2..86107857e1f0 100644
>>>>>> --- a/tools/lib/bpf/xsk.c
>>>>>> +++ b/tools/lib/bpf/xsk.c
>>>>>> @@ -334,13 +334,13 @@ static int xsk_get_max_queues(struct xsk_socket *xsk)
>>>>>>     		goto out;
>>>>>>     	}
>>>>>>     
>>>>>> -	if (channels.max_combined == 0 || errno == EOPNOTSUPP)
>>>>>> +	ret = channels.max_combined + max(channels.max_rx, channels.max_tx);
>>>
>>> So in case of 32 HW queues you'd like to get 64 entries in xskmap?
>>
>> "32 HW queues" is not quite correct. It will be 32 combined channels,
>> each with one regular RX queue and one XSK RX queue (regular RX queues
>> are part of RSS). In this case, I'll have 64 XSKMAP entries.
>>
>>> Do you still
>>> have a need for attaching the XSK sockets to the RSS queues?
>>
>> You can attach an XSK to a regular RX queue, but not in zero-copy mode.
>> The intended use is, of course, to attach XSKs to XSK RX queues in
>> zero-copy mode.
>>
>>> I thought you wanted
>>> them to be separated. So if I'm reading this right, [0, 31] xskmap entries
>>> would be unused most of the time, no?
>>
>> This is correct, but these entries are still needed if one decides to
>> run compatibility mode without zero-copy on queues 0..31.
> 
> Why would I want to run AF_XDP without ZC? The main reason for having AF_XDP
> support in drivers is zero copy, right?

Yes, AF_XDP is intended to be used with zero copy when the driver 
implements it. But I'm not going to break compatibility mode if I can keep 
it supported.

> Besides that, are you educating the user in some way about which queue ids should
> be used so there's ZC in the picture? If that was already asked/answered, then sorry
> about that.

The details about queue IDs are in the commit message for the final patch.
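
To make the queue id convention concrete, here is a minimal sketch of
binding a zero-copy socket to the XSK RX queue of channel 0 with
libbpf's AF_XDP helpers. The interface name, the assumed 32 channels,
and the pre-created umem and rings are placeholders, not part of the
series:

#include <bpf/xsk.h>		/* libbpf AF_XDP helpers */
#include <linux/if_xdp.h>	/* XDP_ZEROCOPY bind flag */

static int bind_zc_on_channel0(struct xsk_socket **xsk,
			       struct xsk_umem *umem,
			       struct xsk_ring_cons *rx,
			       struct xsk_ring_prod *tx)
{
	struct xsk_socket_config cfg = {
		.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
		.bind_flags = XDP_ZEROCOPY,	/* fail rather than fall
						 * back to copy mode */
	};
	__u32 num_channels = 32;	/* assumed, as in the example */

	/* With 32 channels, regular RX queues are qids 0..31 and the
	 * XSK RX queues are 32..63, so channel 0's XSK queue is 32. */
	return xsk_socket__create(xsk, "eth0", num_channels + 0,
				  umem, rx, tx, &cfg);
}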

>>
>>>    
>>>>>> +
>>>>>> +	if (ret == 0 || errno == EOPNOTSUPP)
>>>>>>     		/* If the device says it has no channels, then all traffic
>>>>>>     		 * is sent to a single stream, so max queues = 1.
>>>>>>     		 */
>>>>>>     		ret = 1;
>>>>>> -	else
>>>>>> -		ret = channels.max_combined;
>>>>>>     
>>>>>>     out:
>>>>>>     	close(fd);
>>>>>       
>>>>   
>>>    
>>
> 


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH bpf-next v4 17/17] net/mlx5e: Add XSK zero-copy support
  2019-06-15 15:42   ` [PATCH bpf-next v4 17/17] net/mlx5e: Add XSK zero-copy support Jakub Kicinski
@ 2019-06-18 12:00     ` Maxim Mikityanskiy
  0 siblings, 0 replies; 48+ messages in thread
From: Maxim Mikityanskiy @ 2019-06-18 12:00 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Alexei Starovoitov, Daniel Borkmann, Björn Töpel,
	Magnus Karlsson, bpf, netdev, David S. Miller, Saeed Mahameed,
	Jonathan Lemon, Tariq Toukan, Martin KaFai Lau, Song Liu,
	Yonghong Song, Maciej Fijalkowski

On 2019-06-15 18:42, Jakub Kicinski wrote:
> On Wed, 12 Jun 2019 15:57:09 +0000, Maxim Mikityanskiy wrote:
>> @@ -390,6 +391,12 @@ void mlx5e_ethtool_get_channels(struct mlx5e_priv *priv,
>>   {
>>   	ch->max_combined   = mlx5e_get_netdev_max_channels(priv->netdev);
>>   	ch->combined_count = priv->channels.params.num_channels;
>> +
>> +	/* XSK RQs */
>> +	ch->max_rx         = ch->max_combined;
>> +	/* rx_count shows the number of XSK RQs up to the highest active one. */
>> +	ch->rx_count       = mlx5e_xsk_first_unused_channel(&priv->channels.params,
>> +							    &priv->xsk);
>>   }
> 
> Ah, Maciej pointed out to me this is why you want patch 7 to do
> what it does.

You seem to be confusing cause and effect. The libbpf patch is good 
regardless of mlx5e's needs, because the current formula is incorrect, 
and I'm fixing it. Then I do the cited change in mlx5e, which perfectly 
fits the fixed formula. So, I'm not inserting some hack in libbpf just 
to make mlx5e work, I'm fixing an existing bug, and it allows me to do 
this stuff in mlx5e. It's not about "I need to use ethtool.rx in mlx5e, 
so I'm adapting libbpf to it", it's about "I see an issue in libbpf, so 
I'm fixing it, then I'm adapting mlx5e to fit the formula".

> This count is for the stack's queues.

Second, I disagree with this statement. XSK RX queues are not stack 
queues, but in i40e they are still registered as stack queues. Various 
boundary checks in the kernel use the "number of stack queues" to check 
XSK QIDs. All the existing usage of this count in XSK code shows it's 
not for stack queues only; my usage is no different from that, so I 
don't see any issue in exposing XSK RX queues via ethtool.rx.

Anyway, I'm respinning without patch 7 and ethtool.rx.
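
For context, the counts in question are what the ETHTOOL_GCHANNELS
ioctl reports; a minimal userspace sketch of reading them ("eth0" is a
placeholder):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>

int main(void)
{
	struct ethtool_channels ch = { .cmd = ETHTOOL_GCHANNELS };
	struct ifreq ifr = { 0 };
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	strncpy(ifr.ifr_name, "eth0", sizeof(ifr.ifr_name) - 1);
	ifr.ifr_data = (void *)&ch;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_GCHANNELS");
		close(fd);
		return 1;
	}

	printf("combined %u/%u  rx %u/%u  tx %u/%u\n",
	       ch.combined_count, ch.max_combined,
	       ch.rx_count, ch.max_rx, ch.tx_count, ch.max_tx);
	close(fd);
	return 0;
}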

> Nacked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
> 


^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2019-06-18 12:00 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-12 15:56 [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 01/17] net/mlx5e: Attach/detach XDP program safely Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 02/17] xsk: Add API to check for available entries in FQ Maxim Mikityanskiy
2019-06-13 12:50   ` Björn Töpel
2019-06-12 15:56 ` [PATCH bpf-next v4 03/17] xsk: Add getsockopt XDP_OPTIONS Maxim Mikityanskiy
2019-06-13 12:50   ` Björn Töpel
2019-06-12 15:56 ` [PATCH bpf-next v4 04/17] libbpf: Support " Maxim Mikityanskiy
2019-06-13 12:51   ` Björn Töpel
2019-06-12 15:56 ` [PATCH bpf-next v4 05/17] xsk: Change the default frame size to 4096 and allow controlling it Maxim Mikityanskiy
2019-06-12 20:10   ` Jakub Kicinski
2019-06-13 14:01     ` Maxim Mikityanskiy
2019-06-13 17:29       ` Jakub Kicinski
2019-06-14 13:25         ` Maxim Mikityanskiy
2019-06-15  1:40           ` Jakub Kicinski
2019-06-18 12:00             ` Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 06/17] xsk: Return the whole xdp_desc from xsk_umem_consume_tx Maxim Mikityanskiy
2019-06-13 12:48   ` Björn Töpel
2019-06-12 15:56 ` [PATCH bpf-next v4 07/17] libbpf: Support drivers with non-combined channels Maxim Mikityanskiy
2019-06-12 20:23   ` Jakub Kicinski
2019-06-13 12:41     ` Björn Töpel
2019-06-13 17:34       ` Jakub Kicinski
2019-06-13 14:01     ` Maxim Mikityanskiy
2019-06-13 14:45       ` Maciej Fijalkowski
2019-06-14 13:25         ` Maxim Mikityanskiy
2019-06-14 17:15           ` Maciej Fijalkowski
2019-06-14 19:50             ` Björn Töpel
2019-06-18 12:00             ` Maxim Mikityanskiy
2019-06-13 18:09       ` Jakub Kicinski
2019-06-14 13:25         ` Maxim Mikityanskiy
2019-06-15  2:12           ` Jakub Kicinski
2019-06-12 15:56 ` [PATCH bpf-next v4 08/17] net/mlx5e: Replace deprecated PCI_DMA_TODEVICE Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 09/17] net/mlx5e: Calculate linear RX frag size considering XSK Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 10/17] net/mlx5e: Allow ICO SQ to be used by multiple RQs Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 11/17] net/mlx5e: Refactor struct mlx5e_xdp_info Maxim Mikityanskiy
2019-06-12 15:56 ` [PATCH bpf-next v4 12/17] net/mlx5e: Share the XDP SQ for XDP_TX between RQs Maxim Mikityanskiy
2019-06-12 15:57 ` [PATCH bpf-next v4 13/17] net/mlx5e: XDP_TX from UMEM support Maxim Mikityanskiy
2019-06-12 15:57 ` [PATCH bpf-next v4 14/17] net/mlx5e: Consider XSK in XDP MTU limit calculation Maxim Mikityanskiy
2019-06-12 15:57 ` [PATCH bpf-next v4 15/17] net/mlx5e: Encapsulate open/close queues into a function Maxim Mikityanskiy
2019-06-12 15:57 ` [PATCH bpf-next v4 16/17] net/mlx5e: Move queue param structs to en/params.h Maxim Mikityanskiy
2019-06-12 19:10 ` [PATCH bpf-next v4 00/17] AF_XDP infrastructure improvements and mlx5e support Jonathan Lemon
2019-06-12 20:48 ` Jakub Kicinski
2019-06-13 12:58   ` Björn Töpel
2019-06-13 14:01     ` Maxim Mikityanskiy
2019-06-13 14:11       ` Björn Töpel
2019-06-13 14:53         ` Björn Töpel
2019-06-13 14:01   ` Maxim Mikityanskiy
     [not found] ` <20190612155605.22450-18-maximmi@mellanox.com>
2019-06-15 15:42   ` [PATCH bpf-next v4 17/17] net/mlx5e: Add XSK zero-copy support Jakub Kicinski
2019-06-18 12:00     ` Maxim Mikityanskiy
