From: Juergen Gross <jgross@suse.com>
To: Dongli Zhang <dongli.zhang@oracle.com>,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: wei.liu2@citrix.com, konrad.wilk@oracle.com,
	srinivas.eeda@oracle.com, paul.durrant@citrix.com,
	boris.ostrovsky@oracle.com, roger.pau@citrix.com
Subject: Re: [PATCH 2/6] xenbus: implement the xenwatch multithreading framework
Date: Fri, 14 Sep 2018 10:56:23 +0200	[thread overview]
Message-ID: <2084a3de-29bc-f255-5134-879f1aeb1329__46229.2114542381$1536915314$gmane$org@suse.com> (raw)
In-Reply-To: <1536910456-13337-3-git-send-email-dongli.zhang@oracle.com>

On 14/09/18 09:34, Dongli Zhang wrote:
> This is the 2nd patch of a 6-patch series.
> 
> This patch implements the xenwatch multithreading framework to create or
> destroy the per-domU xenwatch thread. The xenwatch thread is created or
> destroyed during xenbus device probing or removal (that is, in
> xenbus_dev_probe() or xenbus_dev_remove()) if the corresponding pv driver
> has the xenwatch multithreading feature enabled. As there is only a single
> per-domU xenwatch thread for each domid, probing another xenbus device for
> the same domid does not create a second thread, but only increments the
> reference count of the thread's mtwatch domain. When a xenbus device is
> removed, the reference count is decremented. The per-domU xenwatch thread
> is destroyed when the reference count of its mtwatch domain drops to zero,
> that is, when all xenbus devices (whose mtwatch feature is enabled) of that
> mtwatch domain have been removed.
> 
> Therefore, a domid has its own per-domU xenwatch thread only while it is
> attached to a dom0 backend xenbus device whose pv driver has the feature
> enabled. The domid does not have its own xenwatch thread when it is not
> running any mtwatch-enabled xenbus device.
> 
> When a watch (with xenwatch multithreading enabled) is unregistered, we
> generally traverse all mtwatch domains to remove any in-flight pending
> events fired by that watch. However, as an optimization, this patch only
> removes pending events from a single mtwatch domain when the watch is
> registered for a specific domid, that is, when its owner_id field is
> non-zero.
> 
> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
> ---
>  drivers/xen/xenbus/xenbus_probe.c |   6 +
>  drivers/xen/xenbus/xenbus_xs.c    | 273 ++++++++++++++++++++++++++++++++++++++
>  include/xen/xenbus.h              |   3 +
>  3 files changed, 282 insertions(+)
> 
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index 5b47188..5755596 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -236,6 +236,9 @@ int xenbus_dev_probe(struct device *_dev)
>  	if (err)
>  		goto fail;
>  
> +	if (xen_mtwatch && drv->use_mtwatch)
> +		mtwatch_create_domain(dev->otherend_id);
> +
>  	err = watch_otherend(dev);
>  	if (err) {
>  		dev_warn(&dev->dev, "watch_otherend on %s failed.\n",
> @@ -263,6 +266,9 @@ int xenbus_dev_remove(struct device *_dev)
>  	if (drv->remove)
>  		drv->remove(dev);
>  
> +	if (xen_mtwatch && drv->use_mtwatch)
> +		mtwatch_put_domain(dev->otherend_id);
> +
>  	free_otherend_details(dev);
>  
>  	xenbus_switch_state(dev, XenbusStateClosed);
> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
> index 3f137d2..741dc54 100644
> --- a/drivers/xen/xenbus/xenbus_xs.c
> +++ b/drivers/xen/xenbus/xenbus_xs.c
> @@ -108,6 +108,201 @@ static __init int xen_parse_mtwatch(char *arg)
>  }
>  early_param("xen_mtwatch", xen_parse_mtwatch);
>  
> +struct mtwatch_domain *mtwatch_find_domain(domid_t domid)
> +{
> +	struct mtwatch_domain *domain;
> +	int hash = MTWATCH_HASH(domid);
> +	struct hlist_head *hash_head = &mtwatch_info->domain_hash[hash];
> +
> +	hlist_for_each_entry_rcu(domain, hash_head, hash_node) {
> +		if (domain->domid == domid)
> +			return domain;
> +	}
> +
> +	return NULL;
> +}
> +
> +/* per-domU thread for xenwatch multithreading. */
> +static int mtwatch_thread(void *arg)
> +{
> +	struct mtwatch_domain *domain = (struct mtwatch_domain *) arg;
> +	struct list_head *ent;
> +	struct xs_watch_event *event;
> +
> +	domain->pid = current->pid;
> +
> +	for (;;) {
> +		wait_event_interruptible(domain->events_wq,
> +					 !list_empty(&domain->events) ||
> +					 domain->state == MTWATCH_DOMAIN_DOWN);
> +
> +		if (domain->state == MTWATCH_DOMAIN_DOWN &&
> +		    list_empty(&domain->events))
> +			break;
> +
> +		mutex_lock(&domain->domain_mutex);
> +
> +		spin_lock(&domain->events_lock);
> +		ent = domain->events.next;
> +		if (ent != &domain->events)
> +			list_del(ent);
> +		spin_unlock(&domain->events_lock);
> +
> +		if (ent != &domain->events) {
> +			event = list_entry(ent, struct xs_watch_event, list);
> +			event->handle->callback(event->handle, event->path,
> +						event->token);
> +			kfree(event);
> +		}
> +
> +		mutex_unlock(&domain->domain_mutex);
> +	}

This is nearly the same code as xenwatch_thread(). Why don't you
define a new (sub-)structure containing the needed elements and move
xenwatch_pid, watch_events_waitq, watch_events, xenwatch_mutex and
watch_events_lock into such a structure? Then you could use a common
function for both purposes (you'd need to set the state for
xenwatch_thread() to DOMAIN_UP and add a callback for testing the
thread end condition).
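
Roughly what I have in mind (untested sketch; the names
xenwatch_queue, xenwatch_queue_thread and the stop callback are
placeholders only):

struct xenwatch_queue {
	pid_t pid;
	wait_queue_head_t events_wq;
	struct list_head events;
	struct mutex events_mutex;
	spinlock_t events_lock;
	/* Callback for testing the thread end condition. */
	bool (*stop)(struct xenwatch_queue *q);
};

static int xenwatch_queue_thread(void *arg)
{
	struct xenwatch_queue *q = arg;
	struct list_head *ent;
	struct xs_watch_event *event;

	q->pid = current->pid;

	for (;;) {
		wait_event_interruptible(q->events_wq,
					 !list_empty(&q->events) ||
					 q->stop(q));

		if (q->stop(q) && list_empty(&q->events))
			break;

		mutex_lock(&q->events_mutex);

		/* Dequeue a single event, if one is pending. */
		spin_lock(&q->events_lock);
		ent = q->events.next;
		if (ent != &q->events)
			list_del(ent);
		spin_unlock(&q->events_lock);

		if (ent != &q->events) {
			event = list_entry(ent, struct xs_watch_event, list);
			event->handle->callback(event->handle, event->path,
						event->token);
			kfree(event);
		}

		mutex_unlock(&q->events_mutex);
	}

	return 0;
}

The default xenwatch thread would use a stop callback that never
triggers (its state staying DOMAIN_UP), while mtwatch_thread() would
test domain->state == MTWATCH_DOMAIN_DOWN and do its RCU/purge
cleanup after the common function returns.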

> +
> +	/*
> +	 * domain->state is already set to MTWATCH_DOMAIN_DOWN (to avoid
> +	 * new events being queued to domain->events) when the above for
> +	 * loop breaks, so there is no need to clean up domain->events again.
> +	 */
> +
> +	spin_lock(&mtwatch_info->domain_lock);
> +	list_del_rcu(&domain->list_node);
> +	spin_unlock(&mtwatch_info->domain_lock);
> +
> +	spin_lock(&mtwatch_info->purge_lock);
> +	list_add(&domain->purge_node, &mtwatch_info->purge_list);
> +	spin_unlock(&mtwatch_info->purge_lock);
> +
> +	schedule_work(&mtwatch_info->purge_work);
> +
> +	return 0;
> +}
> +
> +static void delayed_destroy_domain(struct rcu_head *head)
> +{
> +	struct mtwatch_domain *domain;
> +
> +	domain = container_of(head, struct mtwatch_domain, rcu);
> +	kfree(domain);
> +}
> +
> +static void xen_mtwatch_purge_domain(struct work_struct *work)
> +{
> +	struct mtwatch_domain *domain;
> +	struct list_head *node;
> +
> +	while (!list_empty(&mtwatch_info->purge_list)) {
> +
> +		spin_lock(&mtwatch_info->purge_lock);
> +		node = mtwatch_info->purge_list.next;
> +		if (node != &mtwatch_info->purge_list)
> +			list_del(node);
> +		spin_unlock(&mtwatch_info->purge_lock);
> +
> +		if (node != &mtwatch_info->purge_list) {
> +			domain = list_entry(node, struct mtwatch_domain,
> +					    purge_node);
> +			kthread_stop(domain->task);
> +
> +			call_rcu(&domain->rcu, delayed_destroy_domain);
> +		}
> +	}
> +}
> +
> +/* Running in the context of default xenwatch kthread. */
> +void mtwatch_create_domain(domid_t domid)
> +{
> +	struct mtwatch_domain *domain;
> +
> +	if (!domid) {
> +		pr_err("Default xenwatch thread is for dom0\n");

Should we really exclude dom0? What if a driver domain wants to support
a dom0-based frontend?
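
If dom0 is not special-cased, the guard could reject only a sentinel
value instead. Untested, and assuming DOMID_SELF is used as the "no
specific domain" marker (as suggested further down):

	if (domid == DOMID_SELF)
		return;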

> +		return;
> +	}
> +
> +	spin_lock(&mtwatch_info->domain_lock);
> +
> +	domain = mtwatch_find_domain(domid);
> +	if (domain) {
> +		atomic_inc(&domain->refcnt);
> +		spin_unlock(&mtwatch_info->domain_lock);
> +		return;
> +	}
> +
> +	domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
> +	if (!domain) {
> +		spin_unlock(&mtwatch_info->domain_lock);
> +		pr_err("Failed to allocate memory for mtwatch thread %d\n",
> +		       domid);

No alloc error messages, please!

> +		return;
> +	}
> +
> +	domain->domid = domid;
> +	atomic_set(&domain->refcnt, 1);
> +	mutex_init(&domain->domain_mutex);
> +	INIT_LIST_HEAD(&domain->purge_node);
> +
> +	init_waitqueue_head(&domain->events_wq);
> +	spin_lock_init(&domain->events_lock);
> +	INIT_LIST_HEAD(&domain->events);
> +
> +	list_add_tail_rcu(&domain->list_node, &mtwatch_info->domain_list);
> +
> +	hlist_add_head_rcu(&domain->hash_node,
> +			   &mtwatch_info->domain_hash[MTWATCH_HASH(domid)]);
> +
> +	spin_unlock(&mtwatch_info->domain_lock);
> +
> +	domain->task = kthread_run(mtwatch_thread, domain,
> +				   "xen-mtwatch-%d", domid);
> +
> +	if (!domain->task) {
> +		pr_err("mtwatch kthread creation failed\n");
> +		domain->state = MTWATCH_DOMAIN_DOWN;
> +
> +		return;
> +	}
> +
> +	domain->state = MTWATCH_DOMAIN_UP;
> +}
> +
> +/* Running in the context of default xenwatch kthread. */
> +void mtwatch_put_domain(domid_t domid)
> +{
> +	struct mtwatch_domain *domain;
> +
> +	spin_lock(&mtwatch_info->domain_lock);
> +
> +	domain = mtwatch_find_domain(domid);
> +	if (!domain) {
> +		spin_unlock(&mtwatch_info->domain_lock);
> +		pr_err("mtwatch kthread for domid=%d does not exist\n",
> +		       domid);
> +		return;
> +	}
> +
> +	if (atomic_dec_and_test(&domain->refcnt)) {
> +
> +		hlist_del_rcu(&domain->hash_node);
> +
> +		if (!domain->task) {
> +			/*
> +			 * As the task failed to initialize during
> +			 * mtwatch_create_domain(), we do not need to wait
> +			 * for the kernel thread to complete.
> +			 */
> +			list_del_rcu(&domain->list_node);
> +			call_rcu(&domain->rcu, delayed_destroy_domain);
> +		} else {
> +			spin_lock(&domain->events_lock);
> +			domain->state = MTWATCH_DOMAIN_DOWN;
> +			spin_unlock(&domain->events_lock);
> +
> +			wake_up(&domain->events_wq);
> +		}
> +	}
> +
> +	spin_unlock(&mtwatch_info->domain_lock);
> +}
> +
>  static void xs_suspend_enter(void)
>  {
>  	spin_lock(&xs_state_lock);
> @@ -793,6 +988,80 @@ int register_xenbus_watch(struct xenbus_watch *watch)
>  }
>  EXPORT_SYMBOL_GPL(register_xenbus_watch);
>  
> +static void __unregister_single_mtwatch(struct xenbus_watch *watch,
> +					struct mtwatch_domain *domain)
> +{
> +	struct xs_watch_event *event, *tmp;
> +
> +	if (current->pid != domain->pid)
> +		mutex_lock(&domain->domain_mutex);
> +
> +	spin_lock(&domain->events_lock);
> +	list_for_each_entry_safe(event, tmp,
> +				 &domain->events, list) {
> +		if (event->handle != watch)
> +			continue;
> +		list_del(&event->list);
> +		kfree(event);
> +	}
> +	spin_unlock(&domain->events_lock);
> +
> +	if (current->pid != domain->pid)
> +		mutex_unlock(&domain->domain_mutex);
> +}
> +
> +static void unregister_single_mtwatch(struct xenbus_watch *watch,
> +				      domid_t domid)
> +{
> +	struct mtwatch_domain *domain;
> +	bool found = false;
> +
> +	rcu_read_lock();
> +
> +	list_for_each_entry_rcu(domain, &mtwatch_info->domain_list,
> +				list_node) {
> +		if (domain->domid == domid) {
> +			found = true;
> +			__unregister_single_mtwatch(watch, domain);
> +		}
> +	}
> +
> +	WARN_ON_ONCE(unlikely(!found));
> +
> +	rcu_read_unlock();
> +}
> +
> +static void unregister_all_mtwatch(struct xenbus_watch *watch)
> +{
> +	struct mtwatch_domain *domain;
> +
> +	rcu_read_lock();
> +
> +	list_for_each_entry_rcu(domain, &mtwatch_info->domain_list,
> +				list_node) {
> +		__unregister_single_mtwatch(watch, domain);
> +	}
> +
> +	rcu_read_unlock();
> +}
> +
> +static void unregister_mtwatch(struct xenbus_watch *watch)
> +{
> +	/*
> +	 * Generally, to unregister a watch, we need to traverse all
> +	 * mtwatch domains to remove all in-flight pending watch events
> +	 * for that watch.
> +	 *
> +	 * The one exception is that we only need to remove pending watch
> +	 * events from a single mtwatch domain when the watch is registered
> +	 * for a specific domid.
> +	 */
> +	if (watch->owner_id)

Again: 0 as a special value isn't a good idea. Maybe use one of the
reserved DOMIDs, like DOMID_SELF?
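
Untested sketch, assuming owner_id is initialized to DOMID_SELF for
watches that are not bound to a specific domain:

	if (watch->owner_id != DOMID_SELF)
		unregister_single_mtwatch(watch, watch->owner_id);
	else
		unregister_all_mtwatch(watch);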


Juergen
