From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Dongli Zhang <dongli.zhang@oracle.com>,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: jgross@suse.com, paul.durrant@citrix.com, wei.liu2@citrix.com,
	konrad.wilk@oracle.com, roger.pau@citrix.com,
	srinivas.eeda@oracle.com
Subject: Re: [PATCH 1/6] xenbus: prepare data structures and parameter for xenwatch multithreading
Date: Mon, 17 Sep 2018 15:08:22 -0400
Message-ID: <68418036-ae16-b58c-71d8-bb177fb30b51@oracle.com>
In-Reply-To: <797673a8-fa7e-0bfc-332e-4e0190c8d1ed@oracle.com>

On 9/16/18 9:20 PM, Dongli Zhang wrote:
> Hi Boris,
>
> On 09/17/2018 04:17 AM, Boris Ostrovsky wrote:
>>
>> On 9/14/18 3:34 AM, Dongli Zhang wrote:
>>
>>> +
>>> +struct mtwatch_info {
>>> +    /*
>>> +     * The mtwatch_domain is put on both a hash table and a list.
>>> +     * domain_list is used to optimize xenbus_watch un-registration.
>>> +     *
>>> +     * The mtwatch_domain is removed from domain_hash (with state set
>>> +     * to MTWATCH_DOMAIN_DOWN) when its refcnt is zero. However, it is
>>> +     * left on domain_list until all events belonging to that
>>> +     * mtwatch_domain are processed in mtwatch_thread().
>>
>> Do we really need to keep mtwatch_domain on both lists? Why is keeping it on,
>> say, only the hash not sufficient?
> In the current xenbus implementation, when a watch is unregistered (e.g., via
> unregister_xenbus_watch()), we need to traverse the list 'watch_events' to
> remove all in-flight/pending events for that watch from 'watch_events'.
>
> With this patch set, since each domain has its own event list, we need to
> traverse the list of each domain to remove the pending events for the watch
> being unregistered.
>
> E.g.,
> unregister_xenbus_watch()-->unregister_mtwatch()-->unregister_all_mtwatch() in
> [PATCH 2/6] xenbus: implement the xenwatch multithreading framework.
>
> Traversing a hash table is not as efficient as traversing a list. That's why a
> domain is kept on both the hash table and the list.
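
(For illustration only, the un-registration walk described above would look
roughly like the sketch below. This is based on the description, not on the
patch itself; the member names 'list_node', 'events', 'events_lock' and the
event struct are assumed.)

/*
 * Sketch only: drop every pending event of 'watch' by walking
 * mtwatch_info->domain_list and each domain's (assumed) 'events' list.
 */
static void unregister_all_mtwatch_sketch(struct mtwatch_info *info,
                                          struct xenbus_watch *watch)
{
        struct mtwatch_domain *domain;
        struct mt_watch_event *event, *tmp;     /* assumed event type */

        spin_lock(&info->domain_lock);
        list_for_each_entry(domain, &info->domain_list, list_node) {
                spin_lock(&domain->events_lock);
                list_for_each_entry_safe(event, tmp, &domain->events, list) {
                        if (event->watch == watch) {
                                list_del(&event->list);
                                kfree(event);
                        }
                }
                spin_unlock(&domain->events_lock);
        }
        spin_unlock(&info->domain_lock);
}

With domain_list this walk only visits active domains; the question below is
whether visiting every hash bucket instead is acceptable.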


Keeping the same object on two lists also has costs. More importantly,
IMO, it increases the chances of introducing a bug when people update one
instance but not the other.


>
>>> +     *
>>> +     * While there may exist two mtwatch_domain with the same domid on
>>> +     * domain_list simultaneously,
>>
>> How is it possible to have two entries with the same domid on the list at the
>> same time? Don't you want to remove it (which IIUIC means moving it to the
>> purge list) when the domain is destroyed?
> Here is one case (suppose the domid/frontend-id is 9):
>
> 1. Suppose the last pv driver device is removed from domid=9, so the reference
> count of the per-domU xenwatch thread for domid=9 (which we call the old thread
> below) drops to zero and the thread should be removed. We remove it from the
> hash table (it is left on the list).
>
> Here we cannot remove the domain from the list immediately because there might
> be pending events still being processed by the corresponding per-domU xenwatch
> thread. If we removed it from the list while a related watch was being
> unregistered, as mentioned for the last question, we might hit a page fault
> when processing the watch event.


Don't you need to grab domain->domain_mutex to remove the driver? Wouldn't
that mean that events for that mtwatch thread cannot be processed?

In any case, I think that having two mtwatch_domains for the same domain
should be avoided (and if you keep only the hash table, this issue gets
resolved automatically ;-)).


-boris
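
(For comparison, a rough sketch of the hash-only alternative suggested above:
the same walk can iterate every bucket of domain_hash, so a separate
domain_list would not be needed. The member name 'hash_node' is assumed.)

/*
 * Sketch only: visit every mtwatch_domain by scanning all
 * MTWATCH_HASH_SIZE buckets instead of a separate domain_list.
 */
static void for_each_mtwatch_domain_sketch(struct mtwatch_info *info,
                                           void (*fn)(struct mtwatch_domain *))
{
        struct mtwatch_domain *domain;
        int i;

        spin_lock(&info->domain_lock);
        for (i = 0; i < MTWATCH_HASH_SIZE; i++)
                hlist_for_each_entry(domain, &info->domain_hash[i], hash_node)
                        fn(domain);
        spin_unlock(&info->domain_lock);
}

The cost is scanning MTWATCH_HASH_SIZE buckets even when few domains are
active, which is the efficiency argument made earlier in the thread.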


>
> 2. Now the administrator attaches a new pv device to domid=9 immediately, and
> therefore the reference count is initially set to 1. The per-domU xenwatch
> thread for domid=9 (called the new thread) is created again. It is inserted
> into both the hash table and the list.
>
> 3. As the old thread for domid=9 might still be on the list, we would have two
> threads for domid=9 (the old one to be removed and the newly inserted one to
> be used by new pv devices).
>
> Dongli Zhang
>
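
(To make the scenario in steps 1-3 concrete, the lookup path presumably does
something like the sketch below; the helpers mtwatch_find_domain() and
mtwatch_create_domain(), the MTWATCH_HASH() macro, and the member names are
all assumed here.)

/*
 * Sketch only: if the old mtwatch_domain for 'domid' has already been
 * removed from domain_hash (state MTWATCH_DOMAIN_DOWN, still draining on
 * domain_list), a fresh one is allocated and added to both the hash table
 * and the list, so old and new entries briefly coexist on domain_list.
 */
static struct mtwatch_domain *mtwatch_get_domain_sketch(struct mtwatch_info *info,
                                                        domid_t domid)
{
        struct mtwatch_domain *domain;

        spin_lock(&info->domain_lock);
        domain = mtwatch_find_domain(info, domid);      /* hash lookup */
        if (!domain) {
                domain = mtwatch_create_domain(domid);  /* refcount = 1 */
                hlist_add_head(&domain->hash_node,
                               &info->domain_hash[MTWATCH_HASH(domid)]);
                list_add_tail(&domain->list_node, &info->domain_list);
        }
        spin_unlock(&info->domain_lock);

        return domain;
}
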
>>
>> -boris
>>
>>
>>> +     * all mtwatch_domain on domain_hash
>>> +     * should have unique domid.
>>> +     */
>>> +    spinlock_t domain_lock;
>>> +    struct hlist_head domain_hash[MTWATCH_HASH_SIZE];
>>> +    struct list_head domain_list;
>>> +
>>> +    /*
>>> +     * When a per-domU kthread is going to be destroyed, it is put
>>> +     * on the purge_list, and will be flushed by purge_work later.
>>> +     */
>>> +    struct work_struct purge_work;
>>> +    spinlock_t purge_lock;
>>> +    struct list_head purge_list;
>>> +};
>>
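
(Finally, a rough sketch of how the purge_work described in the quoted comment
might drain purge_list; the 'purge_node' and 'task' members are assumed.)

/*
 * Sketch only: purge_work moves purge_list aside under purge_lock and
 * then stops and frees each per-domU kthread queued for destruction.
 */
static void mtwatch_purge_work_sketch(struct work_struct *work)
{
        struct mtwatch_info *info = container_of(work, struct mtwatch_info,
                                                 purge_work);
        struct mtwatch_domain *domain, *tmp;
        LIST_HEAD(tmp_list);

        spin_lock(&info->purge_lock);
        list_splice_init(&info->purge_list, &tmp_list);
        spin_unlock(&info->purge_lock);

        list_for_each_entry_safe(domain, tmp, &tmp_list, purge_node) {
                list_del(&domain->purge_node);
                kthread_stop(domain->task);     /* per-domU xenwatch kthread */
                kfree(domain);
        }
}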



Thread overview: 42+ messages
2018-09-14  7:34 Introduce xenwatch multithreading (mtwatch) Dongli Zhang
2018-09-14  7:34 ` [PATCH 1/6] xenbus: prepare data structures and parameter for xenwatch multithreading Dongli Zhang
2018-09-14  8:11   ` Paul Durrant
2018-09-14 13:40     ` [Xen-devel] " Dongli Zhang
2018-09-14  8:32   ` Juergen Gross
2018-09-14 13:57     ` [Xen-devel] " Dongli Zhang
2018-09-14 14:10       ` Juergen Gross
2018-09-16 20:17   ` Boris Ostrovsky
2018-09-17  1:20     ` Dongli Zhang
2018-09-17 19:08       ` Boris Ostrovsky [this message]
2018-09-25  5:14         ` Dongli Zhang
2018-09-25 20:19           ` Boris Ostrovsky
2018-09-26  2:57             ` [Xen-devel] " Dongli Zhang
2018-09-14  7:34 ` [PATCH 2/6] xenbus: implement the xenwatch multithreading framework Dongli Zhang
2018-09-14  8:45   ` Paul Durrant
2018-09-14 14:09     ` [Xen-devel] " Dongli Zhang
2018-09-14  8:56   ` Juergen Gross
2018-09-16 21:20   ` Boris Ostrovsky
2018-09-17  1:48     ` [Xen-devel] " Dongli Zhang
2018-09-17 20:00       ` Boris Ostrovsky
2018-09-14  7:34 ` [PATCH 3/6] xenbus: dispatch per-domU watch event to per-domU xenwatch thread Dongli Zhang
2018-09-14  9:01   ` Juergen Gross
2018-09-17 20:09   ` Boris Ostrovsky
2018-09-14  7:34 ` [PATCH 4/6] xenbus: process otherend_watch event at 'state' entry in xenwatch multithreading Dongli Zhang
2018-09-14  9:04   ` Juergen Gross
2018-09-14  7:34 ` [PATCH 5/6] xenbus: process be_watch events " Dongli Zhang
2018-09-14  9:12   ` Juergen Gross
2018-09-14 14:18     ` [Xen-devel] " Dongli Zhang
2018-09-14 14:26       ` Juergen Gross
2018-09-14 14:29         ` Dongli Zhang
2018-09-14 14:44           ` Juergen Gross
2018-09-19  6:15             ` Dongli Zhang
2018-09-19  8:01               ` Juergen Gross
2018-09-19 12:27                 ` Dongli Zhang
2018-09-19 12:44                   ` Juergen Gross
2018-09-14 14:33     ` Dongli Zhang
2018-09-14  7:34 ` [PATCH 6/6] drivers: enable xenwatch multithreading for xen-netback and xen-blkback driver Dongli Zhang
2018-09-14  9:16   ` Juergen Gross
2018-09-14  9:38     ` Wei Liu
2018-09-14  9:56     ` Roger Pau Monné
2018-09-14  8:16 ` Introduce xenwatch multithreading (mtwatch) Paul Durrant
2018-09-14  9:18 ` Juergen Gross
