From: "jianchao.wang" <jianchao.w.wang@oracle.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
Tariq Toukan <tariqt@mellanox.com>,
junxiao.bi@oracle.com, netdev@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
Saeed Mahameed <saeedm@mellanox.com>
Subject: Re: [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before prod db updating
Date: Tue, 23 Jan 2018 11:25:55 +0800 [thread overview]
Message-ID: <f577e24f-e718-9047-8ae3-6ce7fcb9ec18@oracle.com> (raw)
In-Reply-To: <20180122154734.GD14372@ziepe.ca>
Hi Jason
Thanks for your kind response.
On 01/22/2018 11:47 PM, Jason Gunthorpe wrote:
>>> Yeah, mlx4 NICs in Google fleet receive trillions of packets per
>>> second, and we never noticed an issue.
>>>
>>> Although we are using a slightly different driver, using order-0 pages
>>> and fast pages recycling.
>>>
>>>
>> The driver we use sets the page reference count to (page size)/stride; the
>> pages are freed by the networking stack when the reference count drops to zero, and the
>> order-3 pages may be reallocated soon after. This gives the NIC device a chance to corrupt
>> pages that have since been allocated to others, such as slab.
> But it looks like the wmb() is placed when stuffing new rx descriptors
> into the device - how can it prevent corruption of pages where
> ownership was transferred from device to the host? That sounds more like a
> rmb() is missing someplace to me...
>
The device may see the prod_db update before the rx_desc updates.
It will then fetch stale rx_descs.
These stale rx_descs may contain pages that have already been freed back to the host.
> (Granted the missing wmb() is a bug, but it may not be fully solving this
> issue??)

The wmb() here fixes one of our customer's test cases.
Thanks
Jianchao
Thread overview: 27+ messages
2018-01-12 3:42 [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before prod db updating Jianchao Wang
2018-01-12 16:32 ` Jason Gunthorpe
2018-01-12 16:46 ` Eric Dumazet
2018-01-12 19:53 ` Saeed Mahameed
2018-01-12 20:16 ` Eric Dumazet
2018-01-12 21:01 ` Saeed Mahameed
2018-01-12 21:21 ` Eric Dumazet
2018-01-13 19:15 ` Jason Gunthorpe
2018-01-14 2:40 ` jianchao.wang
2018-01-14 9:47 ` Tariq Toukan
2018-01-15 5:50 ` jianchao.wang
2018-01-19 15:16 ` jianchao.wang
2018-01-19 15:49 ` Eric Dumazet
2018-01-21 9:31 ` Tariq Toukan
2018-01-21 16:24 ` Tariq Toukan
2018-01-21 16:43 ` Eric Dumazet
2018-01-22 2:40 ` jianchao.wang
2018-01-22 15:47 ` Jason Gunthorpe
2018-01-23 3:25 ` jianchao.wang [this message]
2018-01-22 2:12 ` jianchao.wang
2018-01-25 3:27 ` jianchao.wang
2018-01-25 3:55 ` Eric Dumazet
2018-01-25 6:25 ` jianchao.wang
2018-01-25 9:54 ` Tariq Toukan
2018-01-27 12:41 ` jianchao.wang
[not found] ` <d9883261-e93e-400a-757c-3a81d8b6aca1@mellanox.com>
2019-01-02 1:43 ` jianchao.wang
2018-01-21 20:40 ` Jason Gunthorpe