From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Christopher Clark <christopher.w.clark@gmail.com>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Ross Philipson <ross.philipson@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@arm.com>, Tim Deegan <tim@xen.org>,
	Daniel Smith <dpsmith@apertussolutions.com>,
	Rich Persaud <persaur@gmail.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xenproject.org,
	James McKenzie <james@bromium.com>,
	Eric Chanudet <eric.chanudet@gmail.com>
Subject: Re: [PATCH v5 10/15] argo: implement the notify op
Date: Tue, 22 Jan 2019 15:09:07 +0100
Message-ID: <20190122140907.4puijt6gficuzwbp@mac>
In-Reply-To: <1548064795-18160-11-git-send-email-christopher.w.clark@gmail.com>

On Mon, Jan 21, 2019 at 01:59:50AM -0800, Christopher Clark wrote:
> Queries for data about space availability in registered rings and
> causes a notification to be sent when space has become available.
> 
> The hypercall op populates a supplied data structure with information about
> ring state, and if insufficient space is currently available in a given
> ring, the hypervisor will record the domain's expressed interest and notify
> it when it observes that space has become available.
> 
> Checks for free space occur when this notify op is invoked, so it may be
> intentionally invoked with no data structure to populate
> (i.e. a NULL argument) to trigger such a check and consequent notifications.
> 
> Limit the maximum number of notify requests in a single operation to a
> simple fixed limit of 256.
> 
> Signed-off-by: Christopher Clark <christopher.clark6@baesystems.com>

LGTM, but I would like to see the open-coded list traversals below
replaced with the list_* macros:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/common/argo.c b/xen/common/argo.c
> index 518aff7..4b43bdd 100644
> --- a/xen/common/argo.c
> +++ b/xen/common/argo.c
[...]
> +static void
> +pending_notify(struct list_head *to_notify)
> +{
> +    ASSERT(LOCKING_Read_L1);
> +
> +    /* Sending signals for all ents in this list, draining until it is empty. */
> +    while ( !list_empty(to_notify) )
> +    {
> +        struct pending_ent *ent =
> +            list_entry(to_notify->next, struct pending_ent, node);

list_first_entry_or_null
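
Something along these lines (untested sketch; assuming a
list_first_entry_or_null helper with the usual Linux semantics is
available here):

    struct pending_ent *ent;

    /* Drain one entry at a time until the list is empty. */
    while ( (ent = list_first_entry_or_null(to_notify,
                                            struct pending_ent,
                                            node)) != NULL )
    {
        list_del(&ent->node);
        signal_domid(ent->domain_id);
        xfree(ent);
    }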

> +
> +        list_del(&ent->node);
> +        signal_domid(ent->domain_id);
> +        xfree(ent);
> +    }
> +}
> +
> +static void
> +pending_find(const struct domain *d, struct argo_ring_info *ring_info,
> +             unsigned int payload_space, struct list_head *to_notify)
> +{
> +    struct list_head *cursor, *pending_head;
> +
> +    ASSERT(LOCKING_Read_rings_L2(d));
> +
> +    /*
> +     * TODO: Current policy here is to signal _all_ of the waiting domains
> +     *       interested in sending a message of size less than payload_space.
> +     *
> +     * This is likely to be suboptimal, since once one of them has added
> +     * their message to the ring, there may well be insufficient room
> +     * available for any of the others to transmit, meaning that they were
> +     * woken in vain, which created extra work just to requeue their wait.
> +     *
> +     * Retain this simple policy for now since it at least avoids starving a
> +     * domain of available space notifications because of a policy that only
> +     * notified other domains instead. Improvement may be possible;
> +     * investigation required.
> +     */
> +    spin_lock(&ring_info->L3_lock);
> +
> +    /* Remove matching ents from the ring list, and add them to "to_notify" */
> +    pending_head = &ring_info->pending;
> +    cursor = pending_head->next;
> +
> +    while ( cursor != pending_head )
> +    {
> +        struct pending_ent *ent = list_entry(cursor, struct pending_ent, node);
> +
> +        cursor = cursor->next;

list_for_each_entry_safe?
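
That would drop the manual cursor bookkeeping entirely, e.g. (untested
sketch):

    struct pending_ent *ent, *next;

    /* Safe against removal of the current entry while iterating. */
    list_for_each_entry_safe(ent, next, &ring_info->pending, node)
    {
        if ( payload_space >= ent->len )
        {
            if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
                wildcard_pending_list_remove(ent->domain_id, ent);

            list_del(&ent->node);
            ring_info->npending--;
            list_add(&ent->node, to_notify);
        }
    }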

> +
> +        if ( payload_space >= ent->len )
> +        {
> +            if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
> +                wildcard_pending_list_remove(ent->domain_id, ent);
> +
> +            list_del(&ent->node);
> +            ring_info->npending--;
> +            list_add(&ent->node, to_notify);
> +        }
> +    }
> +
> +    spin_unlock(&ring_info->L3_lock);
> +}
> +
>  static int
>  pending_queue(const struct domain *d, struct argo_ring_info *ring_info,
>                domid_t src_id, unsigned int len)
> @@ -1023,6 +1163,36 @@ pending_requeue(const struct domain *d, struct argo_ring_info *ring_info,
>  }
>  
>  static void
> +pending_cancel(const struct domain *d, struct argo_ring_info *ring_info,
> +               domid_t src_id)
> +{
> +    struct list_head *cursor, *pending_head;
> +
> +    ASSERT(LOCKING_L3(d, ring_info));
> +
> +    /* Remove all ents where domain_id matches src_id from the ring's list. */
> +    pending_head = &ring_info->pending;
> +    cursor = pending_head->next;
> +
> +    while ( cursor != pending_head )
> +    {
> +        struct pending_ent *ent = list_entry(cursor, struct pending_ent, node);
> +
> +        cursor = cursor->next;

list_for_each_entry_safe
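
Same pattern as above (sketch):

    struct pending_ent *ent, *next;

    list_for_each_entry_safe(ent, next, &ring_info->pending, node)
    {
        if ( ent->domain_id == src_id )
        {
            /* For wildcard rings, remove from the wildcard list too. */
            if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
                wildcard_pending_list_remove(ent->domain_id, ent);
            list_del(&ent->node);
            xfree(ent);
            ring_info->npending--;
        }
    }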

> +
> +        if ( ent->domain_id == src_id )
> +        {
> +            /* For wildcard rings, remove each from their wildcard list too. */
> +            if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
> +                wildcard_pending_list_remove(ent->domain_id, ent);
> +            list_del(&ent->node);
> +            xfree(ent);
> +            ring_info->npending--;
> +        }
> +    }
> +}
> +
> +static void
>  wildcard_rings_pending_remove(struct domain *d)
>  {
>      struct list_head *wildcard_head;
> @@ -1158,6 +1328,86 @@ partner_rings_remove(struct domain *src_d)
>  }
>  
>  static int
> +fill_ring_data(const struct domain *currd,
> +               XEN_GUEST_HANDLE(xen_argo_ring_data_ent_t) data_ent_hnd)
> +{
> +    xen_argo_ring_data_ent_t ent;
> +    struct domain *dst_d;
> +    struct argo_ring_info *ring_info;
> +    int ret = 0;
> +
> +    ASSERT(currd == current->domain);
> +    ASSERT(LOCKING_Read_L1);
> +
> +    if ( __copy_from_guest(&ent, data_ent_hnd, 1) )
> +        return -EFAULT;
> +
> +    argo_dprintk("fill_ring_data: ent.ring.domain=%u,ent.ring.aport=%x\n",
> +                 ent.ring.domain_id, ent.ring.aport);
> +
> +    ent.flags = 0;
> +
> +    dst_d = get_domain_by_id(ent.ring.domain_id);
> +    if ( !dst_d || !dst_d->argo )
> +        goto out;
> +
> +    read_lock(&dst_d->argo->rings_L2_rwlock);
> +
> +    ring_info = find_ring_info_by_match(dst_d, ent.ring.aport,
> +                                        currd->domain_id);
> +    if ( ring_info )
> +    {
> +        unsigned int space_avail;
> +
> +        ent.flags |= XEN_ARGO_RING_EXISTS;
> +
> +        spin_lock(&ring_info->L3_lock);
> +
> +        ent.max_message_size = ring_info->len -
> +                                   sizeof(struct xen_argo_ring_message_header) -
> +                                   ROUNDUP_MESSAGE(1);
> +
> +        if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
> +            ent.flags |= XEN_ARGO_RING_SHARED;
> +
> +        space_avail = ringbuf_payload_space(dst_d, ring_info);
> +
> +        argo_dprintk("fill_ring_data: aport=%x space_avail=%u"
> +                     " space_wanted=%u\n",
> +                     ring_info->id.aport, space_avail, ent.space_required);
> +
> +        /* Do not queue a notification for an unachievable size */
> +        if ( ent.space_required > ent.max_message_size )
> +            ent.flags |= XEN_ARGO_RING_EMSGSIZE;
> +        else if ( space_avail >= ent.space_required )
> +        {
> +            pending_cancel(dst_d, ring_info, currd->domain_id);
> +            ent.flags |= XEN_ARGO_RING_SUFFICIENT;
> +        }
> +        else
> +            ret = pending_requeue(dst_d, ring_info, currd->domain_id,
> +                                  ent.space_required);
> +
> +        spin_unlock(&ring_info->L3_lock);
> +
> +        if ( space_avail == ent.max_message_size )
> +            ent.flags |= XEN_ARGO_RING_EMPTY;
> +
> +    }
> +    read_unlock(&dst_d->argo->rings_L2_rwlock);
> +
> + out:
> +    if ( dst_d )
> +        put_domain(dst_d);
> +
> +    if ( !ret && (__copy_field_to_guest(data_ent_hnd, &ent, flags) ||
> +                  __copy_field_to_guest(data_ent_hnd, &ent, max_message_size)) )
> +        return -EFAULT;
> +
> +    return ret;
> +}
> +
> +static int
>  find_ring_mfn(struct domain *d, gfn_t gfn, mfn_t *mfn)
>  {
>      struct page_info *page;
> @@ -1586,6 +1836,112 @@ register_ring(struct domain *currd,
>      return ret;
>  }
>  
> +static void
> +notify_ring(const struct domain *d, struct argo_ring_info *ring_info,
> +            struct list_head *to_notify)
> +{
> +    unsigned int space;
> +
> +    ASSERT(LOCKING_Read_rings_L2(d));
> +
> +    spin_lock(&ring_info->L3_lock);
> +
> +    if ( ring_info->len )
> +        space = ringbuf_payload_space(d, ring_info);
> +    else
> +        space = 0;
> +
> +    spin_unlock(&ring_info->L3_lock);
> +
> +    if ( space )
> +        pending_find(d, ring_info, space, to_notify);
> +}
> +
> +static void
> +notify_check_pending(struct domain *d)
> +{
> +    unsigned int i;
> +    LIST_HEAD(to_notify);
> +
> +    ASSERT(LOCKING_Read_L1);
> +
> +    read_lock(&d->argo->rings_L2_rwlock);
> +
> +    /* Walk all rings, call notify_ring on each to populate to_notify list */
> +    for ( i = 0; i < ARGO_HASHTABLE_SIZE; i++ )
> +    {
> +        struct list_head *cursor, *bucket = &d->argo->ring_hash[i];
> +        struct argo_ring_info *ring_info;
> +
> +        for ( cursor = bucket->next; cursor != bucket; cursor = cursor->next )

list_for_each_entry
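
i.e. something like the following (sketch; I'm assuming the hash chain
member of struct argo_ring_info is named node). No entries are removed
during this walk, so the non-safe variant suffices:

    list_for_each_entry(ring_info, &d->argo->ring_hash[i], node)
        notify_ring(d, ring_info, &to_notify);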

Roger.
