Linux-PCI Archive on lore.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Daniel Drake <drake@endlessm.com>, Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Sagi Grimberg <sagi@grimberg.me>,
	Linux PCI <linux-pci@vger.kernel.org>,
	Keith Busch <keith.busch@gmail.com>,
	Keith Busch <kbusch@kernel.org>,
	linux-ide@vger.kernel.org,
	linux-nvme <linux-nvme@lists.infradead.org>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Linux Upstreaming Team <linux@endlessm.com>
Subject: Re: [PATCH] PCI: Add Intel remapped NVMe device support
Date: Tue, 18 Jun 2019 09:46:48 +0200
Message-ID: <06c38b3e-603b-5bae-4959-9965ab40db62@suse.de> (raw)
In-Reply-To: <CAD8Lp47Vu=w+Lj77_vL05JYV1WMog9WX3FHGE+TseFrhcLoTuA@mail.gmail.com>

On 6/14/19 4:26 AM, Daniel Drake wrote:
> On Thu, Jun 13, 2019 at 4:54 PM Christoph Hellwig <hch@lst.de> wrote:
>> So until we get very clear and good documentation from Intel on that
>> I don't think any form of upstream support will fly.  And given that
>> Dan who submitted the original patch can't even talk about this thing
>> any more and apparently got a gag order doesn't really give me confidence
>> any of this will ever work.
> 
> I realise the architecture here seems badly thought out, and the lack
> of a decent spec makes the situation worse, but I'd encourage you to
> reconsider this from the perspectives of:
>  - Are the patches really more ugly than the underlying architecture?
>  - We strive to make Linux work well on common platforms and sometimes
> have to accept that hardware vendors do questionable things & do not
> fully cooperate
>  - It works out of the box on Windows
> 
Actually, there _is_ a register description:

https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/300-series-chipset-pch-datasheet-vol-2.pdf

Look for section 15: Intel RST for PCIe Storage.

That gives you a reasonable description of the various registers etc.
You'll have to translate from Intel-speak, but that should be manageable.

In general, I am _quite_ in favour of having Linux support for this
kind of device.
I fully agree the interface is ugly and badly thought out.
But these devices exist and have been sold for quite some time now, so
there is no way we can influence the design of those boxes.

And we really have come a long way from the original Linux idea of
"hey, that's weird hardware, let's hack together a driver for it" to the
flat rejection of hardware we don't like.
I, for one, prefer the old way.

Looking at the spec, I think that Daniel's approach of exposing an
additional PCI device is the correct way of going about it (as the
interface design can be thought of as a messed-up SR-IOV interface), but
I'll defer to Bjorn here.
Either way, we really should have a driver to get these boxes rolling.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)


Thread overview: 19+ messages
2019-06-10  7:44 Daniel Drake
2019-06-10 16:00 ` Keith Busch
2019-06-11  2:46   ` Daniel Drake
2019-06-12 14:32     ` Keith Busch
2019-06-13  8:54       ` Christoph Hellwig
2019-06-14  2:26         ` Daniel Drake
2019-06-14 19:36           ` Keith Busch
2019-06-14 20:05             ` Bjorn Helgaas
2019-06-14 21:05               ` Keith Busch
2019-06-18  7:48                 ` Hannes Reinecke
2019-06-18  7:46           ` Hannes Reinecke [this message]
2019-06-18  8:06             ` Daniel Drake
2019-06-18 15:15               ` Hannes Reinecke
2019-06-19 13:52                 ` Bjorn Helgaas
2019-06-10 21:16 ` Bjorn Helgaas
2019-06-11  3:25   ` Daniel Drake
2019-06-11 19:52     ` Bjorn Helgaas
2019-06-12  3:16       ` Daniel Drake
2019-06-12 13:49         ` Bjorn Helgaas

