Subject: Re: [PATCH 1/1] fix possible array overflow on receiving too many
 fragments for a packet
To: Xiaohui Zhang, Jesse Brandeburg, Tony Nguyen, David S. Miller,
 Jakub Kicinski, Pensando Drivers, intel-wired-lan@lists.osuosl.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20201208040638.40627-1-ruc_zhangxiaohui@163.com>
From: Shannon Nelson
Date: Wed, 9 Dec 2020 10:01:00 -0800
In-Reply-To: <20201208040638.40627-1-ruc_zhangxiaohui@163.com>

On 12/7/20 8:06 PM, Xiaohui Zhang wrote:
> From: Zhang Xiaohui
>
> If the hardware receives an oversized packet with too many rx fragments,
> skb_shinfo(skb)->frags can overflow and corrupt memory of adjacent pages.
> This becomes especially visible if it corrupts the freelist pointer of
> a slab page.
> I found these two code fragments were very similar to the vulnerable code
> in CVE-2020-12465, so I submitted these two patches.
>
> Signed-off-by: Zhang Xiaohui
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c        | 4 +++-
>  drivers/net/ethernet/pensando/ionic/ionic_txrx.c | 4 +++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
> index eae75260f..f0a252208 100644
> --- a/drivers/net/ethernet/intel/ice/ice_txrx.c
> +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
> @@ -821,9 +821,11 @@ ice_add_rx_frag(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
>  	unsigned int truesize = ice_rx_pg_size(rx_ring) / 2;
>  #endif
>
> +	struct skb_shared_info *shinfo = skb_shinfo(skb);

This declaration should be moved up to sit directly below the #endif, with a
blank line inserted before the code that follows.
>  	if (!size)
>  		return;
> -	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
> +	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags))
> +		skb_add_rx_frag(skb, shinfo, rx_buf->page,
>  			rx_buf->page_offset, size, truesize);
>
>  	/* page is being used so we must update the page offset */
> diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
> index 169ac4f54..d30e83a4b 100644
> --- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
> +++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
> @@ -74,6 +74,7 @@ static struct sk_buff *ionic_rx_frags(struct ionic_queue *q,
>  	struct device *dev = q->lif->ionic->dev;
>  	struct ionic_page_info *page_info;
>  	struct sk_buff *skb;
> +	struct skb_shared_info *shinfo = skb_shinfo(skb);

As the kernel test robot has suggested, this is using an uninitialized skb and
will likely cause great unhappiness.  Also, this needs to follow the
"reverse xmas tree" formatting style for declarations.

>  	unsigned int i;
>  	u16 frag_len;
>  	u16 len;
> @@ -102,7 +103,8 @@ static struct sk_buff *ionic_rx_frags(struct ionic_queue *q,
>
>  	dma_unmap_page(dev, dma_unmap_addr(page_info, dma_addr),
>  		       PAGE_SIZE, DMA_FROM_DEVICE);
> -	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> +	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags))
> +		skb_add_rx_frag(skb, shinfo->nr_frags,
>  			page_info->page, 0, frag_len, PAGE_SIZE);

I'm still not convinced this is necessary here, and I'm still not thrilled
with the result of just quietly dropping the fragments.

A better answer here might be to check the ARRAY_SIZE against
comp->num_sg_elements before allocating the skb, and return NULL if it is too
big - this gets the check done before any allocations are made, and the packet
will be properly dropped and the drop statistic incremented.  A rough sketch
of what I mean is below, after the quoted context.

sln

>  	page_info->page = NULL;
>  	page_info++;
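Roughly what I have in mind, as a completely untested sketch: near the top of
ionic_rx_frags(), before the skb is allocated, add a check along these lines.
The exact field name and the "+ 1" for the first page are assumptions that
would need to be verified against the driver; MAX_SKB_FRAGS is what sizes the
shinfo frags array.

	/* If the completion reports more SG elements than an skb can hold,
	 * give up before any allocations are made; returning NULL here lets
	 * the caller drop the packet and bump the drop statistic as usual.
	 */
	if (unlikely(comp->num_sg_elems + 1 > MAX_SKB_FRAGS))
		return NULL;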