linux-kernel.vger.kernel.org archive mirror
From: Jason Gunthorpe <jgg@mellanox.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: Max Gurtovoy <maxg@mellanox.com>,
	Stephen Rothwell <sfr@canb.auug.org.au>,
	Doug Ledford <dledford@redhat.com>,
	Linux Next Mailing List <linux-next@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Yamin Friedman <yaminf@mellanox.com>,
	Israel Rukshin <israelr@mellanox.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: linux-next: manual merge of the block tree with the rdma tree
Date: Tue, 2 Jun 2020 16:09:45 -0300
Message-ID: <20200602190945.GC65026@mellanox.com>
In-Reply-To: <8be03d71-9c72-bf88-7fd7-76ec7700474a@kernel.dk>

On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
> On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
> > On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
> >>
> >> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
> >>> Hi all,
> >>
> >> Hi,
> >>
> >> This looks good to me.
> >>
> >> Can you share a pointer to the tree so we can test it in our labs?
> >>
> >> need to re-test:
> >>
> >> 1. srq per core
> >>
> >> 2. srq per core + T10-PI
> >>
> >> And both will run with shared CQ.
> > 
> > Max, this is too much conflict to send to Linus between your own
> > patches. I am going to drop the nvme part of this from RDMA.
> > 
> > Normally I don't like applying partial series, but due to this tree
> > split, you can send the rebased nvme part through the nvme/block tree
> > at rc1 in two weeks.
> 
> Was going to comment that this is probably how it should have been
> done to begin with. If we have multiple conflicts like that between
> two trees, someone is doing something wrong...

Well, on the other hand, having people add APIs in one tree and then
(promised) consumers in another tree later on has proven problematic
in the past. It is best to avoid that, but in this case I don't think
Max will see any delay in getting the API consumer into nvme in two
weeks.

Jason
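
For context, the workflow proposed above (dropping the nvme patches from the
rdma tree and re-sending them through the nvme/block tree once rc1 is out)
looks roughly like the sketch below. The remote, branch and revision names are
hypothetical, chosen only to illustrate the steps; they are not taken from
this thread.

  # Sketch only: "block" stands for the nvme/block maintainer tree, whose
  # next-cycle branch should contain rc1 and therefore the new RDMA API.
  git fetch block
  git checkout -b nvme-consumers block/for-next
  git cherry-pick <first-nvme-patch>^..<last-nvme-patch>   # reapply only the dropped nvme patches
  git format-patch -o rebased/ block/for-next..HEAD
  git send-email --to=<nvme maintainers and list> rebased/*.patch

The situation Jason describes, an API merged through one tree with its
(promised) consumer coming later through another, is commonly avoided by
putting the API patches on a shared immutable topic branch that both
maintainer trees merge, so the API and its first user land in the same cycle
without cross-tree conflicts.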

Thread overview: 14+ messages
2020-06-02  2:56 linux-next: manual merge of the block tree with the rdma tree Stephen Rothwell
2020-06-02  8:37 ` Max Gurtovoy
2020-06-02 10:43   ` Stephen Rothwell
2020-06-02 19:01   ` Jason Gunthorpe
2020-06-02 19:02     ` Jens Axboe
2020-06-02 19:09       ` Jason Gunthorpe [this message]
2020-06-02 21:37         ` Jens Axboe
2020-06-02 22:40           ` Max Gurtovoy
2020-06-02 23:32             ` Jason Gunthorpe
2020-06-03 10:56               ` Max Gurtovoy
  -- strict thread matches above, loose matches on Subject: below --
2020-06-02  2:48 Stephen Rothwell
2018-07-26  3:58 Stephen Rothwell
2018-08-15  1:45 ` Stephen Rothwell
2018-08-15 19:26   ` Jason Gunthorpe

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
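
  As one concrete way to follow this method (assuming mutt as the mail
  client and that the linked mbox has been saved locally as thread.mbox;
  both names are illustrative):

    # Open the saved thread, then group-reply to this message from
    # within mutt, quoting inline rather than top-posting.
    mutt -f thread.mbox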

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20200602190945.GC65026@mellanox.com \
    --to=jgg@mellanox.com \
    --cc=axboe@kernel.dk \
    --cc=dledford@redhat.com \
    --cc=hch@lst.de \
    --cc=israelr@mellanox.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-next@vger.kernel.org \
    --cc=maxg@mellanox.com \
    --cc=sfr@canb.auug.org.au \
    --cc=yaminf@mellanox.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.