From: Junjie Chen
Subject: [PATCH] vhost: do deep copy while reallocating vq
Date: Mon, 15 Jan 2018 06:32:19 -0500
Message-ID: <1516015939-11266-1-git-send-email-junjie.j.chen@intel.com>
Cc: dev@dpdk.org, Junjie Chen
To: yliu@fridaylinux.org, maxime.coquelin@redhat.com

When vhost reallocates dev and vq in the NUMA-enabled case, it does not
perform a deep copy, which leads to 1) an invalid zmbuf list and
2) remote memory access. This patch re-initializes the zmbuf list and
performs the deep copy.

Signed-off-by: Junjie Chen
---
 lib/librte_vhost/vhost_user.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index f4c7ce4..795462c 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -227,6 +227,7 @@ vhost_user_set_vring_num(struct virtio_net *dev,
 				"zero copy is force disabled\n");
 			dev->dequeue_zero_copy = 0;
 		}
+		TAILQ_INIT(&vq->zmbuf_list);
 	}
 
 	vq->shadow_used_ring = rte_malloc(NULL,
@@ -261,6 +262,9 @@ numa_realloc(struct virtio_net *dev, int index)
 	int oldnode, newnode;
 	struct virtio_net *old_dev;
 	struct vhost_virtqueue *old_vq, *vq;
+	struct zcopy_mbuf *new_zmbuf;
+	struct vring_used_elem *new_shadow_used_ring;
+	struct batch_copy_elem *new_batch_copy_elems;
 	int ret;
 
 	old_dev = dev;
@@ -285,6 +289,33 @@ numa_realloc(struct virtio_net *dev, int index)
 			return dev;
 
 		memcpy(vq, old_vq, sizeof(*vq));
+		TAILQ_INIT(&vq->zmbuf_list);
+
+		new_zmbuf = rte_malloc_socket(NULL, vq->zmbuf_size *
+			sizeof(struct zcopy_mbuf), 0, newnode);
+		if (new_zmbuf) {
+			rte_free(vq->zmbufs);
+			vq->zmbufs = new_zmbuf;
+		}
+
+		new_shadow_used_ring = rte_malloc_socket(NULL,
+			vq->size * sizeof(struct vring_used_elem),
+			RTE_CACHE_LINE_SIZE,
+			newnode);
+		if (new_shadow_used_ring) {
+			rte_free(vq->shadow_used_ring);
+			vq->shadow_used_ring = new_shadow_used_ring;
+		}
+
+		new_batch_copy_elems = rte_malloc_socket(NULL,
+			vq->size * sizeof(struct batch_copy_elem),
+			RTE_CACHE_LINE_SIZE,
+			newnode);
+		if (new_batch_copy_elems) {
+			rte_free(vq->batch_copy_elems);
+			vq->batch_copy_elems = new_batch_copy_elems;
+		}
+
 		rte_free(old_vq);
 	}
-- 
2.0.1
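
A note on the pattern for readers following along: memcpy(vq, old_vq,
sizeof(*vq)) is only a shallow copy. Two kinds of members in struct
vhost_virtqueue still reference the old allocation afterwards: the
TAILQ head of the zmbuf list (for an empty tail queue, tqh_last points
at the old head's own tqh_first field) and plain pointers such as
zmbufs, shadow_used_ring and batch_copy_elems, which keep pointing at
memory on the old NUMA node. The sketch below shows the hazard and the
fix in plain C with <sys/queue.h>; struct queue and realloc_queue() are
hypothetical stand-ins, not DPDK code, and it assumes, as numa_realloc()
does, that the copy happens before the list and rings hold live data.

#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

/* Hypothetical stand-ins for the vhost_virtqueue members involved. */
struct entry {
	TAILQ_ENTRY(entry) next;
};

struct queue {
	TAILQ_HEAD(, entry) list;	/* cf. vq->zmbuf_list */
	int *ring;			/* cf. vq->shadow_used_ring */
	size_t size;
};

/*
 * Move *old to a fresh allocation (in vhost: one on the new NUMA
 * node). A bare memcpy() alone would leave new->list.tqh_last
 * pointing into *old and new->ring pointing at the old allocation.
 */
static struct queue *
realloc_queue(struct queue *old)
{
	struct queue *new = malloc(sizeof(*new));
	int *new_ring;

	if (new == NULL)
		return old;
	memcpy(new, old, sizeof(*new));

	/* Deep-copy step 1: re-anchor the (still empty) list head. */
	TAILQ_INIT(&new->list);

	/*
	 * Deep-copy step 2: reallocate pointed-to arrays. Like the
	 * patch, this does not preserve contents, since the rings are
	 * not yet in use at this point. On failure, keep the old
	 * buffer, which is still valid, just remote.
	 */
	new_ring = malloc(new->size * sizeof(*new_ring));
	if (new_ring != NULL) {
		free(old->ring);
		new->ring = new_ring;
	}

	free(old);
	return new;
}

This also shows why the patch can treat allocation failure as
non-fatal: the old buffers remain correct, only on the wrong node, so a
failed rte_malloc_socket() costs performance rather than correctness.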