From: "Yang, Zhiyong" <zhiyong.yang@intel.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"thomas.monjalon@6wind.com" <thomas.monjalon@6wind.com>,
	"pmatilai@redhat.com" <pmatilai@redhat.com>,
	"Van Haaren, Harry" <harry.van.haaren@intel.com>
Subject: Re: [PATCH v3 2/2] net/vhost: add pmd xstats
Date: Wed, 21 Sep 2016 07:22:53 +0000	[thread overview]
Message-ID: <E182254E98A5DA4EB1E657AC7CB9BD2A3EB0B972@BGSMSX104.gar.corp.intel.com> (raw)
In-Reply-To: <20160920105638.GT23158@yliu-dev.sh.intel.com>

Hi, Yuanhan:

I will address your comments in the v4 patch.

Thanks
--Zhiyong
> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Tuesday, September 20, 2016 6:57 PM
> To: Yang, Zhiyong <zhiyong.yang@intel.com>
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com; pmatilai@redhat.com;
> Van Haaren, Harry <harry.van.haaren@intel.com>
> Subject: Re: [PATCH v3 2/2] net/vhost: add pmd xstats
> 
> On Tue, Sep 20, 2016 at 05:36:45PM +0800, Zhiyong Yang wrote:
> > diff --git a/drivers/net/vhost/rte_eth_vhost.c
> > b/drivers/net/vhost/rte_eth_vhost.c
> > index 9157bf1..c3f404d 100644
> > --- a/drivers/net/vhost/rte_eth_vhost.c
> > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > @@ -78,6 +78,7 @@ struct vhost_stats {
> >  	uint64_t missed_pkts;
> >  	uint64_t rx_bytes;
> >  	uint64_t tx_bytes;
> > +	uint64_t xstats[16];
> >  };
> >
> >  struct vhost_queue {
> > @@ -131,6 +132,252 @@ struct rte_vhost_vring_state {
> >
> >  static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
> >
> > +enum vhost_xstats_pkts {
> > +	VHOST_UNDERSIZE_PKT = 0,
> > +	VHOST_64_PKT,
> > +	VHOST_65_TO_127_PKT,
> > +	VHOST_128_TO_255_PKT,
> > +	VHOST_256_TO_511_PKT,
> > +	VHOST_512_TO_1023_PKT,
> > +	VHOST_1024_TO_1522_PKT,
> > +	VHOST_1523_TO_MAX_PKT,
> > +	VHOST_BROADCAST_PKT,
> > +	VHOST_MULTICAST_PKT,
> > +	VHOST_ERRORS_PKT,
> > +	VHOST_ERRORS_FRAGMENTED,
> > +	VHOST_ERRORS_JABBER,
> > +	VHOST_UNKNOWN_PROTOCOL,
> > +};
> 
> The definition should go before "struct vhost_stats".
> 
> And VHOST_UNKNOWN_PROTOCOL should be VHOST_XSTATS_MAX. With
> that, the hardcoded number "16" could be avoided and replaced by it.
> 

Ok.

> > +
> > +#define VHOST_XSTATS_NAME_SIZE 64
> > +
> > +struct vhost_xstats_name_off {
> > +	char name[VHOST_XSTATS_NAME_SIZE];
> > +	uint64_t offset;
> > +};
> > +
> > +/* rx_qX_ is prepended to the name string here */ static const
> > +struct vhost_xstats_name_off vhost_rxq_stat_strings[] = {
> > +	{"good_packets",
> > +	 offsetof(struct vhost_queue, stats.rx_pkts)},
> > +	{"total_bytes",
> > +	 offsetof(struct vhost_queue, stats.rx_bytes)},
> > +	{"missed_pkts",
> > +	 offsetof(struct vhost_queue, stats.missed_pkts)},
> > +	{"broadcast_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_BROADCAST_PKT])},
> > +	{"multicast_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_MULTICAST_PKT])},
> > +	{"undersize_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_UNDERSIZE_PKT])},
> > +	{"size_64_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> > +	{"size_65_to_127_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_65_TO_127_PKT])},
> > +	{"size_128_to_255_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_128_TO_255_PKT])},
> > +	{"size_256_to_511_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_256_TO_511_PKT])},
> > +	{"size_512_to_1023_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_512_TO_1023_PKT])},
> > +	{"size_1024_to_1522_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_1024_TO_1522_PKT])},
> > +	{"size_1523_to_max_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_1523_TO_MAX_PKT])},
> > +	{"errors_with_bad_CRC",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> > +	{"fragmented_errors",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_ERRORS_FRAGMENTED])},
> > +	{"jabber_errors",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_ERRORS_JABBER])},
> > +	{"unknown_protos_packets",
> > +	 offsetof(struct vhost_queue,
> > +stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
> 
> Hmm, is "unknown_protos_packets" a valid stat? Why should we handle it
> here? Besides, it will never be accessed, right? I mean,
> VHOST_NB_RXQ_XSTATS makes sure of that.
> 

Yes, unknown_protos_packets is never accessed and is always zero.
I checked the code: the i40e driver implements a similar counter via a NIC register.
I introduced it so that the vhost xstats look like those of a physical NIC to
applications, following the ifInUnknownProtos object in RFC 2863:

 "For packet-oriented interfaces, the number of packets received via
the interface which were discarded because of an unknown or
unsupported protocol. For character-oriented or fixed-length
interfaces that support protocol multiplexing the number of
transmission units received via the interface which were discarded
because of an unknown or unsupported protocol. For any interface
that does not support protocol multiplexing, this counter will always be 0."

> > +
> > +/* tx_qX_ is prepended to the name string here */ static const
> > +struct vhost_xstats_name_off vhost_txq_stat_strings[] = {
> > +	{"good_packets",
> > +	 offsetof(struct vhost_queue, stats.tx_pkts)},
> > +	{"total_bytes",
> > +	 offsetof(struct vhost_queue, stats.tx_bytes)},
> > +	{"missed_pkts",
> > +	 offsetof(struct vhost_queue, stats.missed_pkts)},
> > +	{"broadcast_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_BROADCAST_PKT])},
> > +	{"multicast_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_MULTICAST_PKT])},
> > +	{"size_64_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> > +	{"size_65_to_127_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_65_TO_127_PKT])},
> > +	{"size_128_to_255_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_128_TO_255_PKT])},
> > +	{"size_256_to_511_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_256_TO_511_PKT])},
> > +	{"size_512_to_1023_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_512_TO_1023_PKT])},
> > +	{"size_1024_to_1522_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_1024_TO_1522_PKT])},
> > +	{"size_1523_to_max_packets",
> > +	 offsetof(struct vhost_queue,
> stats.xstats[VHOST_1523_TO_MAX_PKT])},
> > +	{"errors_with_bad_CRC",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])}, };
> > +
> > +#define VHOST_NB_RXQ_XSTATS (sizeof(vhost_rxq_stat_strings) / \
> > +			     sizeof(vhost_rxq_stat_strings[0]))
> > +
> > +#define VHOST_NB_TXQ_XSTATS (sizeof(vhost_txq_stat_strings) / \
> > +			     sizeof(vhost_txq_stat_strings[0]))
> > +
> > +static void
> > +vhost_dev_xstats_reset(struct rte_eth_dev *dev) {
> > +	struct vhost_queue *vqrx = NULL;
> > +	struct vhost_queue *vqtx = NULL;
> 
> I would not introduce two vars here; I'd just define one, vq.
> 

Ok

> > +	unsigned int i = 0;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		vqrx = dev->data->rx_queues[i];
> > +		if (!vqrx)
> > +			continue;
> > +		memset(&vqrx->stats, 0, sizeof(vqrx->stats));
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		vqtx = dev->data->tx_queues[i];
> > +		if (!vqtx)
> > +			continue;
> > +		memset(&vqtx->stats, 0, sizeof(vqtx->stats));
> > +	}
> > +}
> > +
> > +static int
> > +vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
> > +			   struct rte_eth_xstat_name *xstats_names,
> > +			   unsigned int limit __rte_unused) {
> > +	unsigned int i = 0;
> > +	unsigned int t = 0;
> > +	int count = 0;
> > +	int nstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> > +			+ dev->data->nb_tx_queues *
> VHOST_NB_TXQ_XSTATS;
> > +
> > +	if (!xstats_names)
> > +		return nstats;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		if (!dev->data->rx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> > +			snprintf(xstats_names[count].name,
> > +				 sizeof(xstats_names[count].name),
> > +				 "rx_q%u_%s", i,
> > +				 vhost_rxq_stat_strings[t].name);
> > +				 count++;
> > +		}
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		if (!dev->data->tx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> > +			snprintf(xstats_names[count].name,
> > +				 sizeof(xstats_names[count].name),
> > +				 "tx_q%u_%s", i,
> > +				 vhost_txq_stat_strings[t].name);
> > +				 count++;
> > +		}
> > +	}
> > +	return count;
> > +}
> > +
> > +static int
> > +vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat
> *xstats,
> > +		     unsigned int n)
> > +{
> > +	unsigned int i;
> > +	unsigned int t;
> > +	unsigned int count = 0;
> > +	unsigned int nxstats = dev->data->nb_rx_queues *
> VHOST_NB_RXQ_XSTATS
> > +			     + dev->data->nb_tx_queues *
> VHOST_NB_TXQ_XSTATS;
> > +
> > +	if (n < nxstats)
> > +		return nxstats;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		if (!dev->data->rx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> > +			xstats[count].value =
> > +				*(uint64_t *)(((char *)dev->data->rx_queues[i])
> > +				+ vhost_rxq_stat_strings[t].offset);
> > +			count++;
> > +		}
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		if (!dev->data->tx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> > +			xstats[count].value =
> > +				*(uint64_t *)(((char *)dev->data->tx_queues[i])
> > +				+ vhost_txq_stat_strings[t].offset);
> > +			count++;
> > +		}
> > +	}
> > +	return count;
> > +}
> 
> > +static void
> > +vhost_count_multicast_broadcast(struct vhost_queue *vq,
> > +				struct rte_mbuf **bufs,
> 
> I would let this function count one mbuf, so that --->
> 
> > +				uint16_t begin,
> > +				uint16_t end)
> > +{
> > +	uint64_t i = 0;
> > +	struct ether_addr *ea = NULL;
> > +	struct vhost_stats *pstats = &vq->stats;
> > +
> > +	for (i = begin; i < end; i++) {
> > +		ea = rte_pktmbuf_mtod(bufs[i], struct ether_addr *);
> > +		if (is_multicast_ether_addr(ea)) {
> > +			if (is_broadcast_ether_addr(ea))
> > +				pstats->xstats[VHOST_BROADCAST_PKT]++;
> > +			else
> > +				pstats->xstats[VHOST_MULTICAST_PKT]++;
> > +		}
> > +	}
> > +}
> > +
> > +static void
> > +vhost_update_packet_xstats(struct vhost_queue *vq,
> > +			   struct rte_mbuf **bufs,
> > +			   uint16_t nb_rxtx)
> > +{
> > +	uint32_t pkt_len = 0;
> > +	uint64_t i = 0;
> > +	uint64_t index;
> > +	struct vhost_stats *pstats = &vq->stats;
> > +
> > +	for (i = 0; i < nb_rxtx ; i++) {
> > +		pkt_len = bufs[i]->pkt_len;
> > +		if (pkt_len == 64) {
> > +			pstats->xstats[VHOST_64_PKT]++;
> > +		} else if (pkt_len > 64 && pkt_len < 1024) {
> > +			index = (sizeof(pkt_len) * 8)
> > +				- __builtin_clz(pkt_len) - 5;
> > +			pstats->xstats[index]++;
> > +		} else {
> > +			if (pkt_len < 64)
> > +				pstats->xstats[VHOST_UNDERSIZE_PKT]++;
> > +			else if (pkt_len <= 1522)
> > +				pstats->xstats[VHOST_1024_TO_1522_PKT]++;
> > +			else if (pkt_len > 1522)
> > +				pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
> > +		}
> 
> ... you could put it here, and save another for() loop introduced by
> vhost_count_multicast_broadcast().
> 
> BTW, I would use a simpler var name, say "count", instead of "nb_rxtx".

Ok.

> 
> 	--yliu


Thread overview: 57+ messages
2016-08-19 12:16 [PATCH] vhost: add pmd xstats Zhiyong Yang
2016-08-22  7:52 ` Panu Matilainen
2016-08-23  8:04   ` Yang, Zhiyong
2016-08-23  9:45     ` Panu Matilainen
2016-08-24  5:46       ` Yuanhan Liu
2016-08-24  8:44         ` Thomas Monjalon
2016-08-24 12:37           ` Panu Matilainen
2016-08-25  9:21             ` Yang, Zhiyong
2016-08-30  2:45               ` Yao, Lei A
2016-08-30  3:03                 ` Xu, Qian Q
2016-08-30  3:21                   ` Yao, Lei A
2016-08-31  7:18                     ` Yang, Zhiyong
2016-09-01  6:37                       ` Yang, Zhiyong
2016-09-09  8:15 ` [PATCH v2] net/vhost: " Zhiyong Yang
2016-09-09  8:40   ` Van Haaren, Harry
2016-09-09  8:54     ` Yang, Zhiyong
2016-09-14  6:20   ` Yuanhan Liu
2016-09-14  7:43     ` Yang, Zhiyong
2016-09-18 13:16       ` Yuanhan Liu
2016-09-19  2:48         ` Yang, Zhiyong
2016-09-14  8:30     ` Yang, Zhiyong
2016-09-20  9:36   ` [PATCH v3 0/2] net/vhost: add pmd xstats support Zhiyong Yang
2016-09-20  9:36     ` [PATCH v3 1/2] net/vhost: add a new stats struct Zhiyong Yang
2016-09-20 10:44       ` Yuanhan Liu
2016-09-21  5:12         ` Yang, Zhiyong
2016-09-21 10:05       ` [PATCH v4 0/2] net/vhost: add pmd xstats support Zhiyong Yang
2016-09-21 10:05         ` Zhiyong Yang
2016-09-21 10:05         ` [PATCH v4 2/2] net/vhost: add pmd xstats Zhiyong Yang
2016-09-21 10:57           ` Yuanhan Liu
2016-09-22  1:42             ` Yang, Zhiyong
2016-09-22  2:09               ` Yuanhan Liu
2016-09-21 10:13       ` [PATCH v4 1/2] net/vhost: add a new defined stats struct Zhiyong Yang
2016-09-22  8:19         ` [PATCH v5 0/2] net/vhost: add pmd xstats support Zhiyong Yang
2016-09-22  8:19           ` [PATCH v5 1/2] net/vhost: add a new defined stats struct Zhiyong Yang
2016-09-28  8:33             ` [PATCH v6 0/2] net/vhost: add pmd xstats support Zhiyong Yang
2016-09-28  8:33               ` [PATCH v6 1/2] net/vhost: add a new defined stats struct Zhiyong Yang
2016-09-28 13:26                 ` [PATCH v7 0/2] net/vhost: add pmd xstats support Zhiyong Yang
2016-09-28 13:26                   ` [PATCH v7 1/2] net/vhost: add a new defined stats struct Zhiyong Yang
2016-09-29  1:55                     ` Yuanhan Liu
2016-09-29 12:35                     ` [PATCH v8 0/2] net/vhost: add pmd xstats support Zhiyong Yang
2016-09-29 12:35                       ` [PATCH v8 1/2] net/vhost: add a new defined stats struct Zhiyong Yang
2016-09-29 12:35                       ` [PATCH v8 2/2] net/vhost: add pmd xstats Zhiyong Yang
2016-09-29 13:04                         ` Yuanhan Liu
2016-09-29 13:50                           ` Yang, Zhiyong
2016-09-28 13:26                   ` [PATCH v7 " Zhiyong Yang
2016-09-29  8:48                     ` Loftus, Ciara
2016-09-29 12:02                       ` Yuanhan Liu
2016-09-29 12:22                         ` Yang, Zhiyong
2016-09-28  8:33               ` [PATCH v6 " Zhiyong Yang
2016-09-22  8:19           ` [PATCH v5 " Zhiyong Yang
2016-09-23  3:56           ` [PATCH v5 0/2] net/vhost: add pmd xstats support Yuanhan Liu
2016-09-28  2:35             ` Yuanhan Liu
2016-09-20  9:36     ` [PATCH v3 2/2] net/vhost: add pmd xstats Zhiyong Yang
2016-09-20 10:56       ` Yuanhan Liu
2016-09-21  7:22         ` Yang, Zhiyong [this message]
2016-09-20 11:50       ` Yuanhan Liu
2016-09-21  6:15         ` Yang, Zhiyong
