linux-kernel.vger.kernel.org archive mirror
From: Dongli Zhang <dongli.zhang@oracle.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: jgross@suse.com, wei.liu2@citrix.com, konrad.wilk@oracle.com,
	srinivas.eeda@oracle.com, paul.durrant@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH 2/6] xenbus: implement the xenwatch multithreading framework
Date: Mon, 17 Sep 2018 09:48:44 +0800	[thread overview]
Message-ID: <1443a6e8-0a94-6081-b1c6-1f426bbaea38@oracle.com> (raw)
In-Reply-To: <affabca4-0683-f088-7b25-d239ff882fa0@oracle.com>

Hi Boris,

On 09/17/2018 05:20 AM, Boris Ostrovsky wrote:
> 
> 
> On 9/14/18 3:34 AM, Dongli Zhang wrote:
>>
>> +
>> +/* Running in the context of default xenwatch kthread. */
>> +void mtwatch_create_domain(domid_t domid)
>> +{
>> +    struct mtwatch_domain *domain;
>> +
>> +    if (!domid) {
>> +        pr_err("Default xenwatch thread is for dom0\n");
>> +        return;
>> +    }
>> +
>> +    spin_lock(&mtwatch_info->domain_lock);
>> +
>> +    domain = mtwatch_find_domain(domid);
>> +    if (domain) {
>> +        atomic_inc(&domain->refcnt);
>> +        spin_unlock(&mtwatch_info->domain_lock);
>> +        return;
>> +    }
>> +
>> +    domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
> 
> Is there a reason (besides this being done under spinlock) for using GFP_ATOMIC?
> If domain_lock is the only reason I'd probably drop the lock and do GFP_KERNEL.

The spin_lock is the reason.

Would you like me to switch to a mutex here?
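
For reference, the drop-the-lock-and-recheck pattern Boris suggests could look like the following userspace sketch. The pthread mutex stands in for domain_lock and a plain singly linked list for the hash; all names here are illustrative, not the patch's code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical analogue of mtwatch_create_domain(): drop the lock before
 * the sleeping allocation (GFP_KERNEL in the kernel), re-acquire it, and
 * re-check for a concurrent insertion before linking the new entry in. */

struct domain {
	int domid;
	int refcnt;
	struct domain *next;
};

static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;
static struct domain *domain_list;

static struct domain *find_domain(int domid)
{
	struct domain *d;

	for (d = domain_list; d; d = d->next)
		if (d->domid == domid)
			return d;
	return NULL;
}

struct domain *create_domain(int domid)
{
	struct domain *d, *existing;

	pthread_mutex_lock(&domain_lock);
	d = find_domain(domid);
	if (d) {
		d->refcnt++;
		pthread_mutex_unlock(&domain_lock);
		return d;
	}
	pthread_mutex_unlock(&domain_lock);

	/* Sleeping allocation happens with the lock dropped. */
	d = calloc(1, sizeof(*d));
	if (!d)
		return NULL;
	d->domid = domid;
	d->refcnt = 1;

	pthread_mutex_lock(&domain_lock);
	/* Someone may have raced us and inserted the same domid. */
	existing = find_domain(domid);
	if (existing) {
		existing->refcnt++;
		pthread_mutex_unlock(&domain_lock);
		free(d);
		return existing;
	}
	d->next = domain_list;
	domain_list = d;
	pthread_mutex_unlock(&domain_lock);
	return d;
}
```

The re-check after re-acquiring the lock is what makes dropping it safe: a racing caller that inserted the same domid first simply wins, and the loser frees its allocation.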

> 
> 
>> +    if (!domain) {
>> +        spin_unlock(&mtwatch_info->domain_lock);
>> +        pr_err("Failed to allocate memory for mtwatch thread %d\n",
>> +               domid);
>> +        return;
> 
> This needs to return an error code, I think. Or do you want to fall back to
> shared xenwatch thread?

We would fall back to the shared default xenwatch thread. As in [PATCH 3/6], the
event is dispatched to the shared xenwatch thread if the per-domU one is not
available.
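
As a rough illustration of that dispatch rule (hypothetical names and a userspace analogue, not the code from [PATCH 3/6]):

```c
#include <assert.h>
#include <stddef.h>

enum domain_state { DOMAIN_UP, DOMAIN_DOWN };

struct worker {
	enum domain_state state;
	int queued;		/* events queued on this worker */
};

/* Returns the queue the event lands on: the per-domU worker when it
 * exists and is up, otherwise the shared default xenwatch worker. */
struct worker *dispatch_event(struct worker *per_domu, struct worker *shared)
{
	struct worker *target;

	if (per_domu && per_domu->state == DOMAIN_UP)
		target = per_domu;
	else
		target = shared;	/* per-domU thread unavailable: fall back */

	target->queued++;
	return target;
}
```

So a failed per-domU thread creation degrades to the pre-mtwatch behavior instead of dropping the event, which is why create_domain() above does not need to propagate an error code.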

> 
> 
>> +    }
>> +
>> +    domain->domid = domid;
>> +    atomic_set(&domain->refcnt, 1);
>> +    mutex_init(&domain->domain_mutex);
>> +    INIT_LIST_HEAD(&domain->purge_node);
>> +
>> +    init_waitqueue_head(&domain->events_wq);
>> +    spin_lock_init(&domain->events_lock);
>> +    INIT_LIST_HEAD(&domain->events);
>> +
>> +    list_add_tail_rcu(&domain->list_node, &mtwatch_info->domain_list);
>> +
>> +    hlist_add_head_rcu(&domain->hash_node,
>> +               &mtwatch_info->domain_hash[MTWATCH_HASH(domid)]);
>> +
>> +    spin_unlock(&mtwatch_info->domain_lock);
>> +
>> +    domain->task = kthread_run(mtwatch_thread, domain,
>> +                   "xen-mtwatch-%d", domid);
>> +
>> +    if (IS_ERR(domain->task)) {
>> +        pr_err("Failed to create mtwatch kthread for domain %d\n", domid);
>> +        domain->state = MTWATCH_DOMAIN_DOWN;
> 
> 
> Why not clean up right here?

I used to think there might be a race between mtwatch_create_domain() and
mtwatch_put_domain().

I've just realized the race is impossible. I will clean up here.
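
A minimal userspace sketch of that cleanup, assuming the entry was already linked into the list when thread creation fails (names and the thread_ok flag are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct domain {
	int domid;
	struct domain *next;
};

static struct domain *domain_list;

static void unlink_domain(struct domain *d)
{
	struct domain **p;

	for (p = &domain_list; *p; p = &(*p)->next) {
		if (*p == d) {
			*p = d->next;
			break;
		}
	}
}

/* On (simulated) kthread_run() failure, undo the earlier list insertion
 * and free the entry rather than leaving a half-initialized DOWN domain
 * linked in. */
struct domain *create_domain(int domid, int thread_ok)
{
	struct domain *d = calloc(1, sizeof(*d));

	if (!d)
		return NULL;
	d->domid = domid;
	d->next = domain_list;
	domain_list = d;

	if (!thread_ok) {	/* stands in for IS_ERR(kthread_run(...)) */
		unlink_domain(d);
		free(d);
		return NULL;
	}
	return d;
}
```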

> 
>> +
>> +        return;
>> +    }
>> +
>> +    domain->state = MTWATCH_DOMAIN_UP;
>> +}
>> +
> 
> 
>> +
>>   void unregister_xenbus_watch(struct xenbus_watch *watch)
>>   {
>>       struct xs_watch_event *event, *tmp;
>> @@ -831,6 +1100,9 @@ void unregister_xenbus_watch(struct xenbus_watch *watch)
>>         if (current->pid != xenwatch_pid)
>>           mutex_unlock(&xenwatch_mutex);
>> +
>> +    if (xen_mtwatch && watch->get_domid)
>> +        unregister_mtwatch(watch);
> 
> 
> I may not be understanding the logic flow here, but if we successfully
> removed/unregistered/purged the watch from mtwatch lists, do we still need to
> try removing it from watch_events list below?

The original unregister_xenbus_watch() code already removes the pending
events from watch_events before the lines added above run.
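
Schematically, that earlier removal amounts to walking the shared pending-event list and dropping every event that belongs to the watch being unregistered (a userspace sketch with invented names, not the actual xenbus code):

```c
#include <assert.h>
#include <stdlib.h>

struct event {
	int watch_id;
	struct event *next;
};

static struct event *pending;

static void queue_event(int watch_id)
{
	struct event *e = calloc(1, sizeof(*e));

	if (!e)
		return;
	e->watch_id = watch_id;
	e->next = pending;	/* push at head, like list_add() */
	pending = e;
}

/* Unlink and free every pending event owned by the dying watch,
 * leaving events for other watches untouched. */
void drop_watch_events(int watch_id)
{
	struct event **p = &pending;

	while (*p) {
		struct event *e = *p;

		if (e->watch_id == watch_id) {
			*p = e->next;
			free(e);
		} else {
			p = &e->next;
		}
	}
}
```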


Dongli Zhang

> 
> 
> -boris
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel


Thread overview: 42+ messages
2018-09-14  7:34 Introduce xenwatch multithreading (mtwatch) Dongli Zhang
2018-09-14  7:34 ` [PATCH 1/6] xenbus: prepare data structures and parameter for xenwatch multithreading Dongli Zhang
2018-09-14  8:11   ` Paul Durrant
2018-09-14 13:40     ` [Xen-devel] " Dongli Zhang
2018-09-14  8:32   ` Juergen Gross
2018-09-14 13:57     ` [Xen-devel] " Dongli Zhang
2018-09-14 14:10       ` Juergen Gross
2018-09-16 20:17   ` Boris Ostrovsky
2018-09-17  1:20     ` Dongli Zhang
2018-09-17 19:08       ` Boris Ostrovsky
2018-09-25  5:14         ` Dongli Zhang
2018-09-25 20:19           ` Boris Ostrovsky
2018-09-26  2:57             ` [Xen-devel] " Dongli Zhang
2018-09-14  7:34 ` [PATCH 2/6] xenbus: implement the xenwatch multithreading framework Dongli Zhang
2018-09-14  8:45   ` Paul Durrant
2018-09-14 14:09     ` [Xen-devel] " Dongli Zhang
2018-09-14  8:56   ` Juergen Gross
2018-09-16 21:20   ` Boris Ostrovsky
2018-09-17  1:48     ` Dongli Zhang [this message]
2018-09-17 20:00       ` [Xen-devel] " Boris Ostrovsky
2018-09-14  7:34 ` [PATCH 3/6] xenbus: dispatch per-domU watch event to per-domU xenwatch thread Dongli Zhang
2018-09-14  9:01   ` Juergen Gross
2018-09-17 20:09   ` Boris Ostrovsky
2018-09-14  7:34 ` [PATCH 4/6] xenbus: process otherend_watch event at 'state' entry in xenwatch multithreading Dongli Zhang
2018-09-14  9:04   ` Juergen Gross
2018-09-14  7:34 ` [PATCH 5/6] xenbus: process be_watch events " Dongli Zhang
2018-09-14  9:12   ` Juergen Gross
2018-09-14 14:18     ` [Xen-devel] " Dongli Zhang
2018-09-14 14:26       ` Juergen Gross
2018-09-14 14:29         ` Dongli Zhang
2018-09-14 14:44           ` Juergen Gross
2018-09-19  6:15             ` Dongli Zhang
2018-09-19  8:01               ` Juergen Gross
2018-09-19 12:27                 ` Dongli Zhang
2018-09-19 12:44                   ` Juergen Gross
2018-09-14 14:33     ` Dongli Zhang
2018-09-14  7:34 ` [PATCH 6/6] drivers: enable xenwatch multithreading for xen-netback and xen-blkback driver Dongli Zhang
2018-09-14  9:16   ` Juergen Gross
2018-09-14  9:38     ` Wei Liu
2018-09-14  9:56     ` Roger Pau Monné
2018-09-14  8:16 ` Introduce xenwatch multithreading (mtwatch) Paul Durrant
2018-09-14  9:18 ` Juergen Gross
