Date: Thu, 10 Jun 2021 10:44:55 +0300
From: Leon Romanovsky
To: Christoph Hellwig, Jason Gunthorpe
Cc: Doug Ledford, Avihai Horon, linux-kernel@vger.kernel.org,
    linux-rdma@vger.kernel.org, Bart Van Assche, Tom Talpey,
    Santosh Shilimkar, Chuck Lever III, Keith Busch, David Laight,
    Honggang LI, Max Gurtovoy
Subject: Re: [PATCH v2 rdma-next] RDMA/mlx5: Enable Relaxed Ordering by default for kernel ULPs
References: <20210609125241.GA1347@lst.de> <20210609135924.GA6510@lst.de>
In-Reply-To: <20210609135924.GA6510@lst.de>

On Wed, Jun 09, 2021 at 03:59:24PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 09, 2021 at 04:53:23PM +0300, Leon Romanovsky wrote:
> > Sure, did you have in mind some concrete place? Or will a new file in
> > the Documentation/infiniband/ folder be good enough too?
>
> Maybe add a kerneldoc comment for the map_mr_sg() ib_device_ops method?

I hope this hunk from the previous cover letter is good enough.

Jason, do you want a v3, or can you fold this into v2?

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 9423e70a881c..aaf63a6643d6 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2468,6 +2468,13 @@ struct ib_device_ops {
 			     enum ib_uverbs_advise_mr_advice advice, u32 flags,
 			     struct ib_sge *sg_list, u32 num_sge,
 			     struct uverbs_attr_bundle *attrs);
+	/*
+	 * Kernel users should universally support relaxed ordering (RO),
+	 * as they are designed to read data only after observing the CQE
+	 * and to use the DMA API correctly.
+	 *
+	 * Some drivers implicitly enable RO if the platform supports it.
+	 */
 	int (*map_mr_sg)(struct ib_mr *mr, struct scatterlist *sg,
 			 int sg_nents, unsigned int *sg_offset);
 	int (*check_mr_status)(struct ib_mr *mr, u32 check_mask,
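For context (not part of the patch), the discipline that comment describes looks
roughly like the sketch below. The foo_* names and the buffer layout are made up
for illustration; the point is that the ULP embeds a struct ib_cqe in its buffer
and touches the payload only from the ->done handler, after the CQE has been
observed and after the buffer has been handed back to the CPU via the ib_dma_*
helpers:

/*
 * Illustrative sketch only -- not part of the patch.  Names prefixed
 * with foo_ are hypothetical ULP code.
 */
#include <rdma/ib_verbs.h>

struct foo_rx_buf {
	struct ib_cqe	cqe;	/* lets us recover the buffer via container_of() */
	void		*data;
	dma_addr_t	dma_addr;
	size_t		len;
};

static void foo_process_payload(void *data, u32 len);

static void foo_rx_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct foo_rx_buf *buf =
		container_of(wc->wr_cqe, struct foo_rx_buf, cqe);

	if (wc->status != IB_WC_SUCCESS)
		return;

	/*
	 * The payload is read only here, after the CQE has been observed,
	 * and only after the buffer is synced back to the CPU through the
	 * DMA API.  With that discipline the completion is the ordering
	 * point, so relaxed-ordered payload writes are safe.
	 */
	ib_dma_sync_single_for_cpu(cq->device, buf->dma_addr, buf->len,
				   DMA_FROM_DEVICE);
	foo_process_payload(buf->data, wc->byte_len);
}

When posting the receive, such a ULP would set buf->cqe.done = foo_rx_done and
recv_wr.wr_cqe = &buf->cqe, so the CQ code calls the handler once the
completion has been consumed.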