From: David Hildenbrand <david@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Auger Eric <eric.auger@redhat.com>,
	Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
	teawater <teawaterz@linux.alibaba.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>
Subject: Re: [PATCH v6 01/12] memory: Introduce RamDiscardMgr for RAM memory regions
Date: Mon, 22 Feb 2021 15:53:45 +0100	[thread overview]
Message-ID: <24562156-457f-90b5-dcaf-c55fba1e881b@redhat.com> (raw)
In-Reply-To: <a6f7de7a-72c3-a6c6-0a14-3e21a0cc833b@redhat.com>

On 22.02.21 15:18, Paolo Bonzini wrote:
> On 22/02/21 15:03, David Hildenbrand wrote:
>>
>>>> +    /**
>>>> +     * @replay_populated:
>>>> +     *
>>>> +     * Notify the #RamDiscardListener about all populated parts
>>>> within the
>>>> +     * #MemoryRegion via the #RamDiscardMgr.
>>>> +     *
>>>> +     * In case any notification fails, no further notifications are
>>>> triggered.
>>>> +     * The listener is not required to be registered.
>>>> +     *
>>>> +     * @rdm: the #RamDiscardMgr
>>>> +     * @mr: the #MemoryRegion
>>>> +     * @rdl: the #RamDiscardListener
>>>> +     *
>>>> +     * Returns 0 on success, or a negative error if any notification
>>>> failed.
>>>> +     */
>>>> +    int (*replay_populated)(const RamDiscardMgr *rdm, const
>>>> MemoryRegion *mr,
>>>> +                            RamDiscardListener *rdl);
>>>
>>> If this function is only going to use a single callback, just pass it
>>> (together with a void *opaque) as the argument.
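
Something like the following, I assume (hypothetical sketch; the
callback type name is made up):

    typedef int (*ReplayRamPopulate)(const MemoryRegion *mr,
                                     uint64_t offset, uint64_t size,
                                     void *opaque);

    int (*replay_populated)(const RamDiscardMgr *rdm,
                            const MemoryRegion *mr,
                            ReplayRamPopulate replay_fn, void *opaque);

That would indeed avoid requiring a registered RamDiscardListener just
to replay the populated parts.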
>>>> +};
>>>> +
>>>>    typedef struct CoalescedMemoryRange CoalescedMemoryRange;
>>>>    typedef struct MemoryRegionIoeventfd MemoryRegionIoeventfd;
>>>> @@ -487,6 +683,7 @@ struct MemoryRegion {
>>>>        const char *name;
>>>>        unsigned ioeventfd_nb;
>>>>        MemoryRegionIoeventfd *ioeventfds;
>>>> +    RamDiscardMgr *rdm; /* Only for RAM */
>>>>    };
>>>
>>>
>>> The idea of sending discard notifications is obviously good.  I have a
>>> couple of questions on the design that you used for the interface; I'm
>>> not saying that it should be done differently, I would only like to
>>> understand the trade-offs that you chose:
>>
>> Sure!
>>
>>>
>>> 1) can the RamDiscardManager (no abbreviations :)) be just the owner of
>>
>> I used to call it "SparseRamManager", but wanted to stress the semantics
>> - and I can use RamDiscardManager ;). Suggestions welcome.
>>
>>> the memory region (obj->parent)?  Alternatively, if you want to make it
>>> separate from the owner, does it make sense for it to be a separate
>>> reusable object, sitting between virtio-mem and the MemoryRegion, so
>>> that the implementation can be reused?
>>
>> virtio-mem consumes a memory backend (e.g., memory-backend-ram). That
>> one logically "owns" the memory region (and thereby the RAMBlock).
>>
>> The memory backend gets assigned to virtio-mem. At that point,
>> virtio-mem "owns" the memory backend. It will set itself as the
>> RamDiscardManager before mapping the memory region into system address
>> space (whereby it will get exposed to the system).
>>
>> This flow made sense to me. Regarding "reusable object" - I think the
>> only stuff we could fit in there would be, e.g., maintaining the lists of
>> notifiers. I'd rather wait until we actually have a second user to see
>> what we can factor out.
>>
>> If you have any suggestion/preference, please let me know.
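
To illustrate the flow above (simplified sketch; the setter name is an
assumption here, not necessarily what this series uses):

    /* virtio-mem realize(), after the memory backend was assigned: */
    memory_region_set_ram_discard_mgr(&backend->mr, RAM_DISCARD_MGR(vmem));

    /*
     * Only afterwards is the region mapped into system address space
     * and thereby exposed to the system (and to listeners).
     */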
>>
>>>
>>> 2) why have the new RamDiscardListener instead of just extending
>>> MemoryListener? Conveniently, vfio already has a MemoryListener that can
>>
>> It behaves more like the IOMMU notifier in that regard.
> 
> Yes, but does it behave more like the IOMMU notifier in other regards?
> :)  The IOMMU notifier is concerned with an iova concept that doesn't
> exist at the MemoryRegion level, while RamDiscardListener works at the
> (MemoryRegion, offset) level that can easily be represented by a
> MemoryRegionSection.  Using MemoryRegionSection might even simplify the
> listener code.

It's similarly concerned with rather small, lightweight updates, I would say.
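
For comparison, a MemoryRegionSection-based variant of the notifications
could look like this (just a sketch, not what the series currently does):

    void (*notify_populate)(RamDiscardListener *rdl,
                            MemoryRegionSection *section);
    void (*notify_discard)(RamDiscardListener *rdl,
                           MemoryRegionSection *section);

instead of passing (mr, offset, size) triples around.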

> 
>>> be extended, and you wouldn't need the list of RamDiscardListeners.
>>> There is already a precedent of replaying the current state when a
>>> listener is added (see listener_add_address_space), so this is not
>>> something different between ML and RDL.
>>
>> The main motivation is to let the listener decide how it wants to handle
>> the memory region. For example, for vhost, vdpa, kvm, ... I only want a
>> single region, not separate ones for each and every populated range,
>> punching out discarded ranges. Note that there are cases (e.g., anonymous
>> memory) where it's even valid for the guest to read discarded memory.
> 
> Yes, I agree with that.  You would still have the same
> region_add/region_nop/region_del callbacks for KVM and friends; on top
> of that you would have region_populate/region_discard callbacks for VFIO.

I see roughly how it could work; however, I am not sure yet whether this
is the right approach.

I think instead of region_populate/region_discard we would want
individual region_add/region_del notifications when populating/discarding,
for all MemoryListeners that opt in somehow (e.g., VFIO, dump-guest-memory,
...). Similarly, we would then want to call log_sync()/log_clear() only
for these parts.
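
Roughly what I have in mind (hypothetical sketch; the flag name is made
up):

    struct MemoryListener {
        ...
        /*
         * Opt in to receive individual region_add()/region_del()
         * notifications for populated/discarded parts of RAM regions
         * that have a RamDiscardMgr, instead of a single section
         * spanning the whole region.
         */
        bool honors_ram_discards;
        ...
    };

log_sync()/log_clear() would then also only cover the populated parts
for such listeners.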

But what happens when I populate/discard some memory? I don't want to
trigger an address space transaction (begin()...region_nop()...commit())
whenever I populate/discard memory (e.g., in 2 MB granularity), especially
not if nothing changed for most other MemoryListeners.

> 
> Populated regions would be replayed after region_add, while I don't
> think it makes sense to have a region_discard_all callback before
> region_discard.

How would we handle vfio_listener_log_sync() / vfio_sync_dirty_bitmap()?
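
I.e., today vfio_listener_log_sync() ends up calling
vfio_sync_dirty_bitmap() for the whole section; with discards we must
not sync the discarded parts. One option (sketch, assuming a replay
mechanism as discussed above) would be to replay only the populated
parts and sync each one:

    /* hypothetical: invoked for each populated part of the section */
    static int vfio_sync_populated_part(MemoryRegionSection *section,
                                        void *opaque)
    {
        VFIOContainer *container = opaque;

        return vfio_sync_dirty_bitmap(container, section);
    }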

-- 
Thanks,

David / dhildenb



Thread overview: 34+ messages
2021-02-22 11:56 [PATCH v6 00/12] virtio-mem: vfio support David Hildenbrand
2021-02-22 11:56 ` [PATCH v6 01/12] memory: Introduce RamDiscardMgr for RAM memory regions David Hildenbrand
2021-02-22 13:27   ` Paolo Bonzini
2021-02-22 14:03     ` David Hildenbrand
2021-02-22 14:18       ` Paolo Bonzini
2021-02-22 14:53         ` David Hildenbrand [this message]
2021-02-22 17:37           ` Paolo Bonzini
2021-02-22 17:48             ` David Hildenbrand
2021-02-22 19:43             ` David Hildenbrand
2021-02-23 10:50               ` David Hildenbrand
2021-02-23 15:03                 ` Paolo Bonzini
2021-02-23 15:09                   ` David Hildenbrand
2021-02-22 11:56 ` [PATCH v6 02/12] virtio-mem: Factor out traversing unplugged ranges David Hildenbrand
2021-02-22 11:56 ` [PATCH v6 03/12] virtio-mem: Don't report errors when ram_block_discard_range() fails David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 04/12] virtio-mem: Implement RamDiscardMgr interface David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 05/12] vfio: Support for RamDiscardMgr in the !vIOMMU case David Hildenbrand
2021-02-22 13:20   ` Paolo Bonzini
2021-02-22 14:43     ` David Hildenbrand
2021-02-22 17:29       ` Paolo Bonzini
2021-02-22 17:34         ` David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 06/12] vfio: Query and store the maximum number of possible DMA mappings David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 07/12] vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 08/12] vfio: Support for RamDiscardMgr in the vIOMMU case David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 09/12] softmmu/physmem: Don't use atomic operations in ram_block_discard_(disable|require) David Hildenbrand
2021-02-22 13:14   ` Paolo Bonzini
2021-02-22 13:33     ` David Hildenbrand
2021-02-22 14:02       ` Paolo Bonzini
2021-02-22 15:38         ` David Hildenbrand
2021-02-22 17:32           ` Paolo Bonzini
2021-02-23  9:02             ` David Hildenbrand
2021-02-23 15:02               ` Paolo Bonzini
2021-02-22 11:57 ` [PATCH v6 10/12] softmmu/physmem: Extend ram_block_discard_(require|disable) by two discard types David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 11/12] virtio-mem: Require only coordinated discards David Hildenbrand
2021-02-22 11:57 ` [PATCH v6 12/12] vfio: Disable only uncoordinated discards for VFIO_TYPE1 iommus David Hildenbrand
