From: Toshiaki Makita
Subject: [PATCH net 5/7] virtio_net: Don't process redirected XDP frames when XDP is disabled
Date: Thu, 17 Jan 2019 20:20:43 +0900
Message-Id: <1547724045-2726-6-git-send-email-makita.toshiaki@lab.ntt.co.jp>
In-Reply-To: <1547724045-2726-1-git-send-email-makita.toshiaki@lab.ntt.co.jp>
References: <1547724045-2726-1-git-send-email-makita.toshiaki@lab.ntt.co.jp>
To: "David S. Miller", "Michael S. Tsirkin", Jason Wang
Cc: Toshiaki Makita, netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, Jesper Dangaard Brouer
X-Mailing-List: netdev@vger.kernel.org

Commit 8dcc5b0ab0ec ("virtio_net: fix ndo_xdp_xmit crash towards dev not
ready for XDP") tried to avoid access to an unexpected sq while XDP is
disabled, but was not complete.

There was a small window which could cause an out-of-bounds sq access in
virtnet_xdp_xmit() while XDP was being disabled.

An example case, with
 - curr_queue_pairs = 6 (2 for SKB and 4 for XDP)
 - online_cpu_num = xdp_queue_pairs = 4
when XDP is enabled:

CPU 0                         CPU 1
(Disabling XDP)               (Processing redirected XDP frames)

                              virtnet_xdp_xmit()
virtnet_xdp_set()
 _virtnet_set_queues()
  set curr_queue_pairs (2)
                              check if rq->xdp_prog is not NULL
                              virtnet_xdp_sq(vi)
                               qp = curr_queue_pairs -
                                    xdp_queue_pairs +
                                    smp_processor_id()
                                  = 2 - 4 + 1 = -1
                               sq = &vi->sq[qp] // out of bounds access
  set xdp_queue_pairs (0)
  rq->xdp_prog = NULL

Basically we should not change curr_queue_pairs and xdp_queue_pairs
while someone can still read those values. Thus, when disabling XDP,
assign NULL to rq->xdp_prog first, wait for an RCU grace period, and only
then change xxx_queue_pairs.

Note that we need to keep the current order when enabling XDP, though.
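
To make the window concrete, here is a minimal userspace sketch of the index
arithmetic shown in the diagram above. The values are the ones from the
example; the variable names only mirror virtnet_xdp_sq(), this is not the
driver code itself:

/*
 * Not driver code: a standalone illustration of the race window above.
 * curr_queue_pairs has already been lowered to 2 by _virtnet_set_queues()
 * on CPU 0, while xdp_queue_pairs still reads 4 on CPU 1.
 */
#include <stdio.h>

int main(void)
{
        int curr_queue_pairs = 2;  /* already updated by the disable path */
        int xdp_queue_pairs  = 4;  /* not yet updated */
        int cpu              = 1;  /* smp_processor_id() on the xmit path */
        int qp;

        qp = curr_queue_pairs - xdp_queue_pairs + cpu;
        printf("qp = %d\n", qp);   /* prints -1; &vi->sq[qp] would be out of bounds */
        return 0;
}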
Fixes: 186b3c998c50 ("virtio-net: support XDP_REDIRECT")
Signed-off-by: Toshiaki Makita
---
 drivers/net/virtio_net.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 204eedf..ae93f0e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2424,14 +2424,16 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 		}
 	}
 
-	err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
-	if (err)
-		goto err;
-	netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
-	vi->xdp_queue_pairs = xdp_qp;
+	old_prog = rtnl_dereference(vi->rq[0].xdp_prog);
+	if (!old_prog && prog) {
+		err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
+		if (err)
+			goto err_new_prog;
+		netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
+		vi->xdp_queue_pairs = xdp_qp;
+	}
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		old_prog = rtnl_dereference(vi->rq[i].xdp_prog);
 		rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
 		if (i == 0) {
 			if (!old_prog)
@@ -2439,6 +2441,18 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 			if (!prog)
 				virtnet_restore_guest_offloads(vi);
 		}
+	}
+
+	if (old_prog && !prog) {
+		synchronize_net();
+		err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
+		if (err)
+			goto err_old_prog;
+		netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
+		vi->xdp_queue_pairs = xdp_qp;
+	}
+
+	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (old_prog)
 			bpf_prog_put(old_prog);
 		if (netif_running(dev)) {
@@ -2450,7 +2464,11 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 
 	return 0;
 
-err:
+err_old_prog:
+	virtnet_clear_guest_offloads(vi);
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog);
+err_new_prog:
 	if (netif_running(dev)) {
 		for (i = 0; i < vi->max_queue_pairs; i++) {
 			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
-- 
1.8.3.1
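
A side note on the ordering argument in the changelog: below is a compressed,
userspace-only sketch of the enable/disable ordering the patch establishes.
struct fake_vi, xdp_enable(), xdp_disable() and fake_synchronize_rcu() are
illustrative stand-ins chosen here, not virtio_net or kernel API.

#include <stddef.h>

/* Illustrative stand-ins only; not the driver's structures or helpers. */
struct fake_vi {
        int   curr_queue_pairs;
        int   xdp_queue_pairs;
        void *xdp_prog;                   /* stands in for rq->xdp_prog (RCU pointer) */
};

static void fake_synchronize_rcu(void)
{
        /* models synchronize_net(): afterwards no reader is inside virtnet_xdp_xmit() */
}

/* Enable: grow the queues first, publish the program last (current order kept). */
static void xdp_enable(struct fake_vi *vi, void *prog, int xdp_qp)
{
        vi->curr_queue_pairs += xdp_qp;   /* _virtnet_set_queues() */
        vi->xdp_queue_pairs   = xdp_qp;
        vi->xdp_prog          = prog;     /* rcu_assign_pointer() in the driver */
}

/* Disable: unpublish the program first, wait a grace period, then shrink. */
static void xdp_disable(struct fake_vi *vi)
{
        int xdp_qp = vi->xdp_queue_pairs;

        vi->xdp_prog = NULL;              /* readers now bail out before virtnet_xdp_sq() */
        fake_synchronize_rcu();           /* wait until no CPU can hold the old view */
        vi->curr_queue_pairs -= xdp_qp;   /* safe: nobody computes a stale sq index */
        vi->xdp_queue_pairs   = 0;
}

int main(void)
{
        struct fake_vi vi = { .curr_queue_pairs = 2, .xdp_queue_pairs = 0, .xdp_prog = NULL };
        int dummy_prog;

        xdp_enable(&vi, &dummy_prog, 4);  /* 2 SKB queues + 4 XDP queues */
        xdp_disable(&vi);                 /* pointer cleared before the queue counts shrink */
        return 0;
}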