linux-cxl.vger.kernel.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Vikram Sethi <vsethi@nvidia.com>,
	Dan Williams <dan.j.williams@intel.com>
Cc: "linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
	"Natu, Mahesh" <mahesh.natu@intel.com>,
	"Rudoff, Andy" <andy.rudoff@intel.com>,
	Jeff Smith <JSMITH@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>,
	"jglisse@redhat.com" <jglisse@redhat.com>,
	Linux MM <linux-mm@kvack.org>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	Samer El-Haj-Mahmoud <Samer.El-Haj-Mahmoud@arm.com>,
	Shanker Donthineni <sdonthineni@nvidia.com>
Subject: Re: Onlining CXL Type2 device coherent memory
Date: Mon, 2 Nov 2020 18:53:32 +0100	[thread overview]
Message-ID: <2f9fa312-e080-d995-eb82-1ac9e6128a33@redhat.com> (raw)
In-Reply-To: <BL0PR12MB2532D78BF9E62E141AED5EADBD100@BL0PR12MB2532.namprd12.prod.outlook.com>

On 02.11.20 17:17, Vikram Sethi wrote:
> Hi David,
>> From: David Hildenbrand <david@redhat.com>
>> On 31.10.20 17:51, Dan Williams wrote:
>>> On Sat, Oct 31, 2020 at 3:21 AM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 30.10.20 21:37, Dan Williams wrote:
>>>>> On Wed, Oct 28, 2020 at 4:06 PM Vikram Sethi <vsethi@nvidia.com> wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I wanted to kick off a discussion on how Linux onlining of CXL [1]
>>>>>> type 2 device coherent memory, aka Host managed device memory (HDM),
>>>>>> will work for type 2 CXL devices which are available/plugged in at
>>>>>> boot. A type 2 CXL device can simply be thought of as an accelerator
>>>>>> with coherent device memory that also has a CXL.cache to cache system
>>>>>> memory.
>>>>>>
>>>>>> One could envision that BIOS/UEFI could expose the HDM in the EFI
>>>>>> memory map as conventional memory, as well as in ACPI SRAT/SLIT/HMAT.
>>>>>> However, at least on some architectures (arm64), EFI conventional
>>>>>> memory available at kernel boot cannot be offlined, so this may not be
>>>>>> suitable on all architectures.
>>>>>
>>>>> That seems an odd restriction. Add David, linux-mm, and linux-acpi as
>>>>> they might be interested / have comments on this restriction as well.
>>>>>
>>>>
>>>> I am missing some important details.
>>>>
>>>> a) What happens after offlining? Will the memory be remove_memory()'ed?
>>>> Will the device get physically unplugged?
>>>>
> Not always, IMO. If the device was getting reset, the HDM memory is going
> to be unavailable while the device is reset. Offlining the memory around
> the reset would be sufficient.

Ouch, that speaks IMHO completely against exposing it as System RAM by
default.

> But depending on whether the driver had done the add_memory() in probe, it
> would perhaps be onerous to have to remove_memory() as well before the
> reset, and then add it back afterwards. I realize you're saying such a
> procedure would be abusing the hotplug framework, and we could perhaps
> require that the memory be removed prior to reset, but it is not clear to
> me that it *must* be removed for correctness.
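
(For illustration only, here is a rough sketch of the sequence being
described: add_memory_driver_managed() at probe time and an offline/remove
step before a device reset. The cxl_t2_* helpers and the
"System RAM (cxl_type2)" resource string are hypothetical, and the calls
assume the v5.10-era memory hotplug API.)

  #include <linux/types.h>
  #include <linux/memory_hotplug.h>

  /*
   * Hypothetical probe-time step: hand the device's HDM to the buddy as
   * driver-managed System RAM.  Such memory is tagged separately in
   * /proc/iomem and is not added to the firmware-provided memory map.
   */
  static int cxl_t2_add_hdm(int nid, u64 hdm_base, u64 hdm_size)
  {
          return add_memory_driver_managed(nid, hdm_base, hdm_size,
                                           "System RAM (cxl_type2)",
                                           MHP_NONE);
  }

  /*
   * Hypothetical pre-reset step: offline and remove the HDM again.  The
   * v5.10-era helper operates on a single memory block, so a larger range
   * would have to be walked block by block; it fails (e.g. -EBUSY) if the
   * memory cannot be offlined.
   */
  static int cxl_t2_prepare_reset(int nid, u64 hdm_base, u64 hdm_size)
  {
          return offline_and_remove_memory(nid, hdm_base, hdm_size);
  }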
> 
> Another use case of offlining without removing HDM could be around
> virtualization, i.e., passing an entire device with its memory to a VM. If
> the device was being used in the host kernel, and is then unbound and
> bound to vfio-pci (vfio-cxl?), would we expect vfio-pci to call
> add_memory_driver_managed()?

At least for passing through memory to VMs (via KVM), you don't actually 
need struct pages / memory exposed to the buddy via 
add_memory_driver_managed(). Actually, doing that sounds like the wrong 
approach.

E.g., you would "allocate" the memory via devdax/dax_hmat and directly 
map the resulting device into guest address space. At least that's what 
some people are doing with
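
To make the devdax path concrete, a minimal userspace sketch (not from the
thread; the device path, size, slot number, and guest physical address are
made-up example values): mmap a dax device and register the mapping as a
KVM memslot, without the HDM ever being onlined as System RAM in the host.

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kvm.h>

  /* Map 1 GiB of a (made-up) devdax device and register it as guest RAM.
   * vm_fd is a VM file descriptor obtained via KVM_CREATE_VM. */
  static int map_hdm_into_guest(int vm_fd, uint64_t guest_phys_addr)
  {
          const uint64_t size = 1ULL << 30;
          int dax_fd = open("/dev/dax0.0", O_RDWR);
          if (dax_fd < 0)
                  return -1;

          /* devdax requires offset/length compatible with its alignment. */
          void *host_va = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, dax_fd, 0);
          if (host_va == MAP_FAILED)
                  return -1;

          struct kvm_userspace_memory_region region = {
                  .slot            = 1,
                  .guest_phys_addr = guest_phys_addr,
                  .memory_size     = size,
                  .userspace_addr  = (uint64_t)(uintptr_t)host_va,
          };
          return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }

KVM only needs a valid host userspace mapping here; the HDM never has to
show up in the host's page allocator.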

-- 
Thanks,

David / dhildenb


Thread overview: 17+ messages
2020-10-28 23:05 Onlining CXL Type2 device coherent memory Vikram Sethi
2020-10-29 14:50 ` Ben Widawsky
2020-10-30 20:37 ` Dan Williams
2020-10-30 20:59   ` Matthew Wilcox
2020-10-30 23:38     ` Dan Williams
2020-10-30 22:39   ` Vikram Sethi
2020-11-02 17:47     ` Dan Williams
2020-10-31 10:21   ` David Hildenbrand
2020-10-31 16:51     ` Dan Williams
2020-11-02  9:51       ` David Hildenbrand
2020-11-02 16:17         ` Vikram Sethi
2020-11-02 17:53           ` David Hildenbrand [this message]
2020-11-02 18:03             ` Dan Williams
2020-11-02 19:25               ` Vikram Sethi
2020-11-02 19:45                 ` Dan Williams
2020-11-03  3:56                 ` Alistair Popple
2020-11-02 18:34       ` Jonathan Cameron
