From: Gavin Li <gavinl@nvidia.com>
To: <mst@redhat.com>, <jasowang@redhat.com>,
<xuanzhuo@linux.alibaba.com>, <davem@davemloft.net>,
<edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>,
<ast@kernel.org>, <daniel@iogearbox.net>, <hawk@kernel.org>,
<john.fastabend@gmail.com>, <jiri@nvidia.com>,
<dtatulea@nvidia.com>
Cc: <gavi@nvidia.com>, <virtualization@lists.linux-foundation.org>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<bpf@vger.kernel.org>
Subject: [PATCH net-next V2 1/4] virtio_net: extract interrupt coalescing settings to a structure
Date: Mon, 17 Jul 2023 17:30:34 +0300
Message-ID: <20230717143037.21858-2-gavinl@nvidia.com>
In-Reply-To: <20230717143037.21858-1-gavinl@nvidia.com>
Extract interrupt coalescing settings to a structure so that they can be
reused in other data structures.
Signed-off-by: Gavin Li <gavinl@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
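For readers who want the shape of the change without walking the hunks, here
is a minimal, stand-alone sketch in plain C. It is illustrative only:
uint32_t stands in for the kernel's u32, and struct virtnet_info_sketch is a
hypothetical, trimmed stand-in for struct virtnet_info limited to the fields
this patch touches.

#include <stdint.h>

/* Same layout as the structure added by this patch. */
struct virtnet_interrupt_coalesce {
	uint32_t max_packets;
	uint32_t max_usecs;
};

/* Trimmed stand-in for struct virtnet_info: the four scalar fields
 * (tx_usecs, rx_usecs, tx_max_packets, rx_max_packets) become two
 * embedded structures, one per direction. */
struct virtnet_info_sketch {
	struct virtnet_interrupt_coalesce intr_coal_tx;
	struct virtnet_interrupt_coalesce intr_coal_rx;
};

int main(void)
{
	struct virtnet_info_sketch vi = { 0 };

	/* Callers now reach the knobs through the embedded struct. */
	vi.intr_coal_tx.max_usecs = 8;
	vi.intr_coal_rx.max_packets = 64;
	return 0;
}

Grouping the two knobs per direction also means the same structure can later
be embedded once per queue, which is what the per-queue coalescing patches
later in this series build on.
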
drivers/net/virtio_net.c | 35 +++++++++++++++++++----------------
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0db14f6b87d3..dd5fec073a27 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -126,6 +126,11 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
#define VIRTNET_SQ_STATS_LEN ARRAY_SIZE(virtnet_sq_stats_desc)
#define VIRTNET_RQ_STATS_LEN ARRAY_SIZE(virtnet_rq_stats_desc)
+struct virtnet_interrupt_coalesce {
+ u32 max_packets;
+ u32 max_usecs;
+};
+
/* Internal representation of a send virtqueue */
struct send_queue {
/* Virtqueue associated with this send _queue */
@@ -281,10 +286,8 @@ struct virtnet_info {
u32 speed;
/* Interrupt coalescing settings */
- u32 tx_usecs;
- u32 rx_usecs;
- u32 tx_max_packets;
- u32 rx_max_packets;
+ struct virtnet_interrupt_coalesce intr_coal_tx;
+ struct virtnet_interrupt_coalesce intr_coal_rx;
unsigned long guest_offloads;
unsigned long guest_offloads_capable;
@@ -3056,8 +3059,8 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
return -EINVAL;
/* Save parameters */
- vi->tx_usecs = ec->tx_coalesce_usecs;
- vi->tx_max_packets = ec->tx_max_coalesced_frames;
+ vi->intr_coal_tx.max_usecs = ec->tx_coalesce_usecs;
+ vi->intr_coal_tx.max_packets = ec->tx_max_coalesced_frames;
vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
@@ -3069,8 +3072,8 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
return -EINVAL;
/* Save parameters */
- vi->rx_usecs = ec->rx_coalesce_usecs;
- vi->rx_max_packets = ec->rx_max_coalesced_frames;
+ vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs;
+ vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames;
return 0;
}
@@ -3132,10 +3135,10 @@ static int virtnet_get_coalesce(struct net_device *dev,
struct virtnet_info *vi = netdev_priv(dev);
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
- ec->rx_coalesce_usecs = vi->rx_usecs;
- ec->tx_coalesce_usecs = vi->tx_usecs;
- ec->tx_max_coalesced_frames = vi->tx_max_packets;
- ec->rx_max_coalesced_frames = vi->rx_max_packets;
+ ec->rx_coalesce_usecs = vi->intr_coal_rx.max_usecs;
+ ec->tx_coalesce_usecs = vi->intr_coal_tx.max_usecs;
+ ec->tx_max_coalesced_frames = vi->intr_coal_tx.max_packets;
+ ec->rx_max_coalesced_frames = vi->intr_coal_rx.max_packets;
} else {
ec->rx_max_coalesced_frames = 1;
@@ -4119,10 +4122,10 @@ static int virtnet_probe(struct virtio_device *vdev)
}
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
- vi->rx_usecs = 0;
- vi->tx_usecs = 0;
- vi->tx_max_packets = 0;
- vi->rx_max_packets = 0;
+ vi->intr_coal_rx.max_usecs = 0;
+ vi->intr_coal_tx.max_usecs = 0;
+ vi->intr_coal_tx.max_packets = 0;
+ vi->intr_coal_rx.max_packets = 0;
}
if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
--
2.39.1
Thread overview: 11+ messages
2023-07-17 14:30 [PATCH net-next V2 0/4] virtio_net: add per queue interrupt coalescing support Gavin Li
2023-07-17 14:30 ` Gavin Li [this message]
2023-07-17 14:30 ` [PATCH net-next V2 2/4] virtio_net: extract get/set interrupt coalesce to a function Gavin Li
2023-07-17 14:30 ` [PATCH net-next V2 3/4] virtio_net: support per queue interrupt coalesce command Gavin Li
2023-07-17 15:22 ` Gavin Li
2023-07-18 3:29 ` Heng Qi
2023-07-18 6:28 ` Gavin Li
2023-07-18 3:37 ` Heng Qi
2023-07-18 6:45 ` Gavin Li
2023-07-19 12:19 ` Heng Qi
2023-07-17 14:30 ` [PATCH net-next V2 4/4] virtio_net: enable per queue interrupt coalesce feature Gavin Li