From: "Jan Beulich" <JBeulich@suse.com>
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tamas K Lengyel <tlengyel@novetta.com>,
	Xen-devel <xen-devel@lists.xenproject.org>,
	Keir Fraser <keir@xen.org>
Subject: Re: [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
Date: Mon, 19 Oct 2015 03:04:41 -0600
Message-ID: <5624CE4902000078000AC39C@prv-mh.provo.novell.com>
In-Reply-To: <CABfawhnqQcdpECGcMcDyUtPDoupZ-6ufKctUOnsRkQbMV0EUcw@mail.gmail.com>

>>> On 16.10.15 at 19:02, <tamas@tklengyel.com> wrote:
> On Fri, Oct 16, 2015 at 12:46 AM, Jan Beulich <JBeulich@suse.com> wrote:
>> >>> On 15.10.15 at 20:09, <tamas@tklengyel.com> wrote:
>> > +                    rc = -EFAULT;
>> > +                else
>> > +                    rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
>> > +                                                       "lh", XENMEM_sharing_op,
>> > +                                                       arg);
>> > +            } else {
>>
>> Coding style.
> 
> Coding style matches the rest of the file.

Sorry, but this is not Xen coding style, and other bad examples in the
file shouldn't be used as an excuse.
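
As a sketch of what Xen's CODING_STYLE asks for here (condition and
bodies abbreviated), the closing brace and the else each go on their own
line, with spaces inside the condition parentheses:

    if ( condition )
    {
        /* taken when the condition holds */
    }
    else
    {
        /* taken otherwise */
    }

i.e. not "} else {" on a single line as in the patch.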

>> > +                mso.u.bulk.applied = atomic_read(&d->shr_pages);
>>
>> Is there no possibility for something else to also modify d->shr_pages
>> in parallel, misleading the caller when looking at the output field? Also
>> you're not copying this back to guest memory...
> 
> It shouldn't be an issue, as I don't really care how many pages were
> shared by the operation itself; I care about the total number of pages
> shared on the domain. If there were already some pages shared before this
> operation, those should be counted here again. I could of course issue a
> separate getdomaininfo to get the same information; it's just more
> expedient to report it right away instead of having to do a separate call.
> By default shr_pages will be equivalent to the number of pages successfully
> shared during this operation. If someone decides to also do unsharing in
> parallel to this op (how would that even make sense?), well, that's not
> supported right now, so all bets are off from my perspective.

But the field being named "applied" suggests a different meaning.
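
To illustrate the naming concern with a hypothetical caller-side sketch
(the function names below are illustrative, not actual libxc interfaces):
if the field reports the domain-wide shr_pages total, a caller that wants
the count for this operation alone has to derive it:

    /* Illustrative names only; not a real libxc API. */
    uint64_t before = get_shr_pages(source_domid);  /* e.g. via getdomaininfo */
    int rc = bulk_share(xch, source_domid, client_domid, &applied);
    /* What a field named "applied" would naturally be expected to hold: */
    uint64_t shared_by_this_op = applied - before;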

> Also, we do copy it back to guest memory when the operation finishes; the
> whole mso is copied back. It's not bulk-specific; it applies to all !rc
> cases further down.

Oh, okay.

>> > @@ -482,7 +483,16 @@ struct xen_mem_sharing_op {
>> >              uint64_aligned_t client_gfn;    /* IN: the client gfn */
>> >              uint64_aligned_t client_handle; /* IN: handle to the client page */
>> >              domid_t  client_domain; /* IN: the client domain id */
>> > -        } share;
>> > +        } share;
>> > +        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
>> > +            domid_t client_domain;           /* IN: the client domain id */
>>
>> Explicit padding here please (albeit I see already from the context
>> that this isn't being done in e.g. the share sub-structure above
>> either).
> 
> Not sure what you mean by explicit padding here. The way it's written
> right now pretty much matches what was already in place.

domid_t is a 16-bit value, i.e. there's a gap after this field which
would better be made explicit (as we do in many places elsewhere,
albeit - as said - sadly not in the example visible in patch context
here).
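
As a sketch of how the gap could be made explicit (the _pad name is
illustrative; "applied" is taken from the patch context quoted above):

    struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
        domid_t client_domain;           /* IN: the client domain id */
        uint16_t _pad[3];                /* explicit padding up to the
                                            next 64-bit boundary */
        uint64_aligned_t applied;        /* OUT: see discussion above */
    } bulk;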

Jan


Thread overview: 15+ messages
2015-10-15 18:09 [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains Tamas K Lengyel
2015-10-15 18:09 ` [PATCH v3 2/2] tests/mem-sharing: Add bulk option to memshrtool Tamas K Lengyel
2015-10-16  6:46 ` [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains Jan Beulich
2015-10-16 17:02   ` Tamas K Lengyel
2015-10-19  9:04     ` Jan Beulich [this message]
2016-05-12 15:25 Tamas K Lengyel
2016-05-13 12:00 ` Jan Beulich
2016-05-13 14:50   ` Tamas K Lengyel
2016-05-13 15:09     ` Jan Beulich
2016-05-13 15:31       ` Tamas K Lengyel
2016-05-13 16:12         ` Jan Beulich
2016-05-13 16:29           ` Tamas K Lengyel
2016-05-17  7:49             ` Jan Beulich
2016-05-13 15:35       ` Daniel De Graaf
2016-05-13 16:19         ` Jan Beulich
