From: Mario Limonciello <mario.limonciello@amd.com>
To: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	"Rafael J . Wysocki" <rjw@rjwysocki.net>
Cc: linux-nvme@lists.infradead.org (open list:NVM EXPRESS DRIVER),
	linux-acpi@vger.kernel.org, rrangel@chromium.org,
	david.e.box@linux.intel.com, Shyam-sundar.S-k@amd.com,
	Alexander.Deucher@amd.com, prike.liang@amd.com,
	Mario Limonciello <mario.limonciello@amd.com>
Subject: [PATCH v3 0/2] Improvements to StorageD3Enable
Date: Fri, 28 May 2021 11:01:18 -0500
Message-ID: <20210528160120.9299-1-mario.limonciello@amd.com>

A number of AMD based OEM systems have problems resuming from s2idle,
rooted in the fact that power to the NVMe device is cut during s2idle.

That alone is not a bug: the architecture used on Cezanne, Renoir, and
Picasso expects this.

Many of these systems do include the StorageD3Enable property, but it is
located in the PCI device itself, not in a root port sibling as on Intel
systems.

Intel confirmed that the property was placed there during pre-production,
and that for production using the PCI device itself is sufficient.
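
In practice the check on these systems therefore reduces to reading the
_DSD property straight off the ACPI companion of the storage PCI device,
with no walk up to the root port.  A minimal sketch of that check (the
function name and exact body are illustrative, not lifted from the
patches):

#include <linux/acpi.h>
#include <linux/pci.h>

/* Illustrative only: read StorageD3Enable from the ACPI companion of
 * the storage PCI device itself.
 */
static bool storage_d3_requested(struct pci_dev *pdev)
{
	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
	const union acpi_object *obj;

	if (!adev)
		return false;

	/* StorageD3Enable is an integer _DSD property; 1 means "use D3" */
	if (acpi_dev_get_property(adev, "StorageD3Enable",
				  ACPI_TYPE_INTEGER, &obj))
		return false;

	return obj->integer.value == 1;
}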

During the discussion of the merits of different approaches it was
mentioned that, although the property was originally introduced for NVMe
devices, the Microsoft specification also alludes to non-PCI ACPI storage
devices, so a proposal was made to move this check into the ACPI
subsystem.

If at a later time other firmware solutions decide to advertise this
functionality, it may make sense to move the check out of ACPI into a
more generic location.  However, both AMD's and Intel's s2idle solutions
also rely upon calling other ACPI drivers, so adopting another approach
would require coming up with alternatives for those as well.
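
To expose this generically, the natural shape is a helper implemented in
drivers/acpi/device_pm.c and declared in include/linux/acpi.h, with a
stub for !CONFIG_ACPI builds so non-ACPI configurations compile the check
away to "false".  The helper name (acpi_storage_d3) and the stub below
are assumptions inferred from the diffstat, not quotes from the patches:

/* Sketch of the include/linux/acpi.h plumbing */
struct device;

#ifdef CONFIG_ACPI
bool acpi_storage_d3(struct device *dev);
#else
static inline bool acpi_storage_d3(struct device *dev)
{
	return false;
}
#endif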

Mario Limonciello (2):
  nvme: Look for StorageD3Enable on companion ACPI device instead
  acpi: Move check for _DSD StorageD3Enable property to acpi

 drivers/acpi/device_pm.c | 24 +++++++++++++++++++
 drivers/nvme/host/pci.c  | 50 +---------------------------------------
 include/linux/acpi.h     |  5 ++++
 3 files changed, 30 insertions(+), 49 deletions(-)
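
With the property parsing in the ACPI core, the NVMe side of the series
reduces to asking that helper and applying the driver's existing quirk.
A sketch of roughly how the consumer could look (the wrapper function
name is hypothetical; NVME_QUIRK_SIMPLE_SUSPEND is the existing nvme
quirk that routes suspend through D3):

#include <linux/acpi.h>
#include <linux/pci.h>
#include "nvme.h"	/* NVME_QUIRK_SIMPLE_SUSPEND (driver-local header) */

/* Hypothetical wrapper: the driver no longer parses _DSD itself, it
 * only asks the ACPI core and applies its quirk flag.
 */
static unsigned long nvme_platform_quirks(struct pci_dev *pdev)
{
	unsigned long quirks = 0;

	if (acpi_storage_d3(&pdev->dev)) {
		dev_info(&pdev->dev,
			 "platform quirk: setting simple suspend\n");
		quirks |= NVME_QUIRK_SIMPLE_SUSPEND;
	}

	return quirks;
}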

-- 
2.25.1


Thread overview (3+ messages):
2021-05-28 16:01 Mario Limonciello [this message]
2021-05-28 16:01 ` [PATCH v3 0/2] Improvements to StorageD3Enable Mario Limonciello
2021-06-02  5:55 ` Julian Sikorski
