From: Eric Dumazet <eric.dumazet@gmail.com>
To: Saeed Mahameed <saeedm@mellanox.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: tariqt@mellanox.com, junxiao.bi@oracle.com,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] net/mlx4_en: ensure rx_desc updating reaches HW before prod db updating
Date: Fri, 12 Jan 2018 12:16:31 -0800 [thread overview]
Message-ID: <1515788191.131759.48.camel@gmail.com> (raw)
In-Reply-To: <85116e56-52b1-944d-6ee2-916ccfc3a7a6@mellanox.com>
On Fri, 2018-01-12 at 11:53 -0800, Saeed Mahameed wrote:
>
> On 01/12/2018 08:46 AM, Eric Dumazet wrote:
> > On Fri, 2018-01-12 at 09:32 -0700, Jason Gunthorpe wrote:
> > > On Fri, Jan 12, 2018 at 11:42:22AM +0800, Jianchao Wang wrote:
> > > > A customer reported a memory corruption issue on a previous
> > > > mlx4_en driver version that still used order-3 pages and
> > > > multiple page reference counting.
> > > >
> > > > One of the root causes turned out to be that the HW may see
> > > > stale rx_descs when the prod db update reaches the HW before
> > > > the rx_desc writes do. Especially when crossing an order-3
> > > > page boundary and moving to a new page, the HW may write to
> > > > pages that have already been freed and reallocated by others.
> > > >
> > > > To fix it, add a wmb between the rx_desc and prod db updates
> > > > to enforce their order. Even though order-0 pages and page
> > > > recycling have since been introduced, reordering between the
> > > > rx_desc and prod db updates could still corrupt other inbound
> > > > packets.
> > > >
> > > > Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
> > > > ---
> > > > drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > > > index 85e28ef..eefa82c 100644
> > > > --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > > > +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > > > @@ -555,7 +555,7 @@ static void mlx4_en_refill_rx_buffers(struct mlx4_en_priv *priv,
> > > > break;
> > > > ring->prod++;
> > > > } while (likely(--missing));
> > > > -
> > > > + wmb(); /* ensure rx_desc updating reaches HW before prod db updating */
> > > > mlx4_en_update_rx_prod_db(ring);
> > > > }
> > > >
> > >
> > > Does this need to be dma_wmb(), and should it be in
> > > mlx4_en_update_rx_prod_db ?
> > >
> >
> > +1 on dma_wmb()
> >
> > On what architecture was the bug observed?
> >
> > In any case, I think the barrier should be moved into
> > mlx4_en_update_rx_prod_db().
> >
>
> +1 on dma_wmb(), thanks Eric for reviewing this.
>
> The barrier is also needed elsewhere in the code, but I wouldn't
> put it in mlx4_en_update_rx_prod_db(), just to allow batch filling of
> all rx rings and then hitting the barrier only once. As a rule of
> thumb, memory barriers are the ring API caller's responsibility.
>
> e.g. in mlx4_en_activate_rx_rings():
> between mlx4_en_fill_rx_buffers(priv); and the loop that updates the
> rx prod for all rings, the dma_wmb is needed, see below.
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index b4d144e67514..65541721a240 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -370,6 +370,8 @@ int mlx4_en_activate_rx_rings(struct mlx4_en_priv *priv)
> if (err)
> goto err_buffers;
>
> + dma_wmb();
> +
> for (ring_ind = 0; ring_ind < priv->rx_ring_num; ring_ind++) {
> ring = priv->rx_ring[ring_ind];
Why bother, considering dma_wmb() is a nop on x86, simply a compiler
barrier?

Putting it in mlx4_en_update_rx_prod_db() avoids obscure bugs down
the road.

Also, we might replace the existing wmb() in mlx4_en_process_rx_cq()
with dma_wmb(); that would help performance a bit.
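The placement argued for above can be sketched in plain C. This is a
hypothetical user-space model, not the actual mlx4 code: dma_wmb() is
stood in for by a GCC full-fence builtin, the doorbell is just a
volatile variable, and the helper names only mirror the driver's. The
point it illustrates is the barrier living inside the doorbell helper,
so every caller inherits the ordering without having to remember it:

```c
#include <stdint.h>

/* User-space stand-in for the kernel's dma_wmb(). On x86 the real
 * dma_wmb() is only a compiler barrier; a full fence builtin is used
 * here purely to mark where the ordering constraint sits. */
#define dma_wmb() __sync_synchronize()

/* Hypothetical ring: desc[] models the rx descriptor array in DMA
 * memory, prod the producer index, db the MMIO doorbell register. */
struct rx_ring {
	uint32_t desc[8];
	uint32_t prod;
	volatile uint32_t *db;
};

/* Fill n descriptors, loosely mirroring what a refill path does. */
static void refill_rx_buffers(struct rx_ring *ring, int n)
{
	for (int i = 0; i < n; i++) {
		ring->desc[ring->prod % 8] = 0x1000u + ring->prod;
		ring->prod++;
	}
}

/* Barrier placed inside the doorbell helper: descriptor writes are
 * ordered before the doorbell write for every caller, so the HW can
 * never observe the new prod index with stale descriptors. */
static void update_rx_prod_db(struct rx_ring *ring)
{
	dma_wmb();              /* order desc writes before db write */
	*ring->db = ring->prod; /* HW reads prod after descs landed */
}
```

A caller simply fills descriptors and rings the doorbell; it cannot
forget the barrier because it never sees it, which is the "no obscure
bugs" argument for this placement over the caller-side one.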