Subject: Re: [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before prod db updating
From: "jianchao.wang"
To: Tariq Toukan, Eric Dumazet, Jason Gunthorpe
Cc: junxiao.bi@oracle.com, netdev@vger.kernel.org, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, Saeed Mahameed
Message-ID: <61230e99-9b16-e1f2-93cf-637f0280f4df@oracle.com>
Date: Wed, 2 Jan 2019 09:43:56 +0800

On 12/31/18 12:27 AM, Tariq Toukan wrote:
>
> On 1/27/2018 2:41 PM, jianchao.wang wrote:
>> Hi Tariq,
>>
>> Thanks for your kind response.
>> That's really appreciated.
>>
>> On 01/25/2018 05:54 PM, Tariq Toukan wrote:
>>>
>>> On 25/01/2018 8:25 AM, jianchao.wang wrote:
>>>> Hi Eric,
>>>>
>>>> Thanks for your kind response and suggestion.
>>>> That's really appreciated.
>>>>
>>>> Jianchao
>>>>
>>>> On 01/25/2018 11:55 AM, Eric Dumazet wrote:
>>>>> On Thu, 2018-01-25 at 11:27 +0800, jianchao.wang wrote:
>>>>>> Hi Tariq,
>>>>>>
>>>>>> On 01/22/2018 10:12 AM, jianchao.wang wrote:
>>>>>>>>> On 19/01/2018 5:49 PM, Eric Dumazet wrote:
>>>>>>>>>> On Fri, 2018-01-19 at 23:16 +0800, jianchao.wang wrote:
>>>>>>>>>>> Hi Tariq,
>>>>>>>>>>>
>>>>>>>>>>> Sadly, the crash was reproduced again after applying the patch.
>>>>>>>>
>>>>>>>> Memory barriers vary across architectures; can you please share more details about the arch and the repro steps?
>>>>>>> The hardware is an HP ProLiant DL380 Gen9, BIOS P89 12/27/2015.
>>>>>>> Xen is installed; the crash occurred in Dom0.
>>>>>>> As for the repro steps, it is a customer test that does heavy disk I/O over NFS storage, without any guest running.
>>>>>>>
>>>>>>
>>>>>> What is the final suggestion on this?
>>>>>> If we use wmb() there, does it hurt performance?
>>>
>>> I want to evaluate this effect.
>>> I agree with Eric; the expected impact is limited, especially after batching the allocations.
>>>
>>>>>
>>>>> Since https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=dad42c3038a59d27fced28ee4ec1d4a891b28155
>>>>> we batch allocations, so mlx4_en_refill_rx_buffers() is not called that often.
>>>>>
>>>>> I doubt the additional wmb() will have a serious impact there.
>>>>>
>>>
>>> I will test the effect (it'll be the beginning of next week).
>>> I'll update so we can make a more confident decision.
>>>
>> I have also sent the patches with wmb() and batched allocations to the customer so they can check whether performance is affected.
>> I will update here as soon as I get feedback.
>>
>>> Thanks,
>>> Tariq
>>>
>
> Hi Jianchao,
>
> I am interested in pushing this bug fix.
> Do you want me to submit it, or will you do it yourself?
> Can you elaborate on the arch used in the repro?
>
> This is the patch I suggest:
>
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -161,6 +161,8 @@ static bool mlx4_en_is_ring_empty(const struct mlx4_en_rx_ring *ring)
>
>  static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
>  {
> +	/* ensure rx_desc updating reaches HW before prod db updating */
> +	wmb();
>  	*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
>  }
>

Hi Tariq,

Happy new year!

The customer gave us confusing test results: this fix alone could not resolve their issue. We eventually found an upstream fix, 5d70bd5c98d0e655bde2aae2b5251bdd44df5e71 ("net/mlx4_en: fix potential use-after-free with dma_unmap_page"), which killed the issue in October 2018. It was a long road.

Please go ahead with this patch.

Thanks
Jianchao
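
As a side note on the ordering issue discussed above, below is a minimal, illustrative sketch of the pattern the suggested patch enforces: the descriptor writes must become visible before the doorbell write that advertises them to the NIC. The structure and function names here (rx_desc, rx_ring, refill_and_ring_doorbell) are simplified stand-ins rather than the real mlx4 definitions; only the placement of wmb() between the descriptor update and the doorbell update mirrors the patch.

#include <linux/types.h>      /* __be32, __be64, u32, dma_addr_t */
#include <asm/barrier.h>      /* wmb() */
#include <asm/byteorder.h>    /* cpu_to_be32(), cpu_to_be64() */

/* Illustrative stand-ins, not the real mlx4 structures. */
struct rx_desc {
	__be64 addr;           /* DMA address of the posted buffer */
	__be32 byte_count;     /* buffer length */
};

struct rx_ring {
	struct rx_desc *desc;  /* descriptor ring read by the NIC via DMA */
	__be32 *db;            /* producer doorbell, mapped to the device */
	u32 prod;              /* producer index */
};

static void refill_and_ring_doorbell(struct rx_ring *ring, dma_addr_t dma, u32 len)
{
	struct rx_desc *d = &ring->desc[ring->prod & 0xffff];

	/* 1. Publish the new buffer in the descriptor ring. */
	d->addr = cpu_to_be64(dma);
	d->byte_count = cpu_to_be32(len);
	ring->prod++;

	/*
	 * 2. Order the descriptor writes before the doorbell write.
	 * Without this barrier the doorbell update may become visible to
	 * the device first, and the NIC could fetch a stale descriptor.
	 */
	wmb();

	/* 3. Only now advertise the new producer index to the hardware. */
	*ring->db = cpu_to_be32(ring->prod & 0xffff);
}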