From: Juergen Gross <jgross@suse.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses
Date: Wed, 11 Jan 2017 06:26:52 +0100
Message-ID: <2fddb09d-742a-e505-70ab-ec9815f93176@suse.com>
In-Reply-To: <14bc2980-fbb1-7a49-5308-3097a345e37d@oracle.com>

On 10/01/17 23:56, Boris Ostrovsky wrote:
> 
> 
> 
>> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
>> index ebc768f..ebdfbee 100644
>> --- a/drivers/xen/xenbus/xenbus_xs.c
>> +++ b/drivers/xen/xenbus/xenbus_xs.c
> 
> 
>> -
>> -static struct xs_handle xs_state;
>> +/*
>> + * Framework to protect suspend/resume handling against normal Xenstore
>> + * message handling:
>> + * During suspend/resume there must be no open transaction and no pending
>> + * Xenstore request.
>> + * New watch events happening in this time can be ignored by firing all watches
>> + * after resume.
>> + */
>> +/* Lock protecting enter/exit critical region. */
>> +static DEFINE_SPINLOCK(xs_state_lock);
>> +/* Wait queue for all callers waiting for critical region to become usable. */
>> +static DECLARE_WAIT_QUEUE_HEAD(xs_state_enter_wq);
>> +/* Wait queue for suspend handling waiting for critical region being empty. */
>> +static DECLARE_WAIT_QUEUE_HEAD(xs_state_exit_wq);
>> +/* Number of users in critical region. */
>> +static unsigned int xs_state_users;
>> +/* Suspend handler waiting or already active? */
>> +static int xs_suspend_active;
> 
> I think these two should be declared next to xs_state_lock since they
> are protected by it. Or maybe even put them into some sort of a state
> struct.

I think placing them near the lock and adding a comment is enough.
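
Something like this (just a sketch of the grouping I have in mind, based on
the declarations above):

/* Lock protecting enter/exit critical region. */
static DEFINE_SPINLOCK(xs_state_lock);
/* Protected by xs_state_lock: */
/* Number of users in critical region. */
static unsigned int xs_state_users;
/* Suspend handler waiting or already active? */
static int xs_suspend_active;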

>> +
>> +
>> +static bool test_reply(struct xb_req_data *req)
>> +{
>> +	if (req->state == xb_req_state_got_reply || !xenbus_ok())
>> +		return true;
>> +
>> +	/* Make sure to reread req->state each time. */
>> +	cpu_relax();
> 
> I don't think I understand why this is needed.

I need a compiler barrier here. Otherwise the compiler might read
req->state only once, outside the while loop.
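
A simplified illustration of the problem (not the code in the patch):

while (req->state != xb_req_state_got_reply)
	cpu_relax();	/* compiler barrier: req->state is reread on each iteration */

Without the barrier the compiler would be free to load req->state once
before the loop and spin on a stale value; READ_ONCE(req->state) would be
another way to force the reread.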

>> +
>> +	return false;
>> +}
>> +
> 
> 
>> +static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
>>  {
>> -	mutex_lock(&xs_state.transaction_mutex);
>> -	atomic_inc(&xs_state.transaction_count);
>> -	mutex_unlock(&xs_state.transaction_mutex);
>> -}
>> +	bool notify;
>>  
>> -static void transaction_end(void)
>> -{
>> -	if (atomic_dec_and_test(&xs_state.transaction_count))
>> -		wake_up(&xs_state.transaction_wq);
>> -}
>> +	req->msg = *msg;
>> +	req->err = 0;
>> +	req->state = xb_req_state_queued;
>> +	init_waitqueue_head(&req->wq);
>>  
>> -static void transaction_suspend(void)
>> -{
>> -	mutex_lock(&xs_state.transaction_mutex);
>> -	wait_event(xs_state.transaction_wq,
>> -		   atomic_read(&xs_state.transaction_count) == 0);
>> -}
>> +	xs_request_enter(req);
>>  
>> -static void transaction_resume(void)
>> -{
>> -	mutex_unlock(&xs_state.transaction_mutex);
>> +	req->msg.req_id = xs_request_id++;
> 
> Is it safe to do this without a lock?

You are right: I should move this into xs_request_enter(), inside the
lock. I think I'll have xs_request_enter() return the request id.
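
Roughly like this (a sketch only, the rest of xs_request_enter() stays as
in the patch):

static uint32_t xs_request_enter(struct xb_req_data *req)
{
	uint32_t rq_id;

	spin_lock(&xs_state_lock);

	/* ... wait for / enter the critical region as before ... */

	rq_id = xs_request_id++;
	spin_unlock(&xs_state_lock);

	return rq_id;
}

and in xs_send():

	req->msg.req_id = xs_request_enter(req);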

>> +
>> +int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void *par)
>> +{
>> +	struct xb_req_data *req;
>> +	struct kvec *vec;
>> +
>> +	req = kmalloc(sizeof(*req) + sizeof(*vec), GFP_KERNEL);
> 
> Is there a reason why you are using different flags here?

Yes. This function is always called in user context. No need to be
more restrictive.
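
For reference, my understanding of the two call sites (annotated copies of
the quoted allocations, the comments are mine):

/* xenbus_dev_request_and_reply(): always plain user context, a normal
 * sleeping allocation is fine. */
req = kmalloc(sizeof(*req) + sizeof(*vec), GFP_KERNEL);

/* xs_talkv(): can also be reached from more constrained contexts (e.g.
 * around suspend/resume), so avoid starting new I/O for the allocation
 * and allow use of reserves. */
req = kmalloc(sizeof(*req), GFP_NOIO | __GFP_HIGH);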

>> @@ -263,11 +295,20 @@ static void *xs_talkv(struct xenbus_transaction t,
>>  		      unsigned int num_vecs,
>>  		      unsigned int *len)
>>  {
>> +	struct xb_req_data *req;
>>  	struct xsd_sockmsg msg;
>>  	void *ret = NULL;
>>  	unsigned int i;
>>  	int err;
>>  
>> +	req = kmalloc(sizeof(*req), GFP_NOIO | __GFP_HIGH);
>> +	if (!req)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	req->vec = iovec;
>> +	req->num_vecs = num_vecs;
>> +	req->cb = xs_wake_up;
>> +
>>  	msg.tx_id = t.id;
>>  	msg.req_id = 0;
> 
> Is this still needed? You are assigning it in xs_send().

Right. Can be removed.

>> +static int xs_reboot_notify(struct notifier_block *nb,
>> +			    unsigned long code, void *unused)
>>  {
>> -	struct xs_stored_msg *msg;
> 
> 
> 
>> +	struct xb_req_data *req;
>> +
>> +	mutex_lock(&xb_write_mutex);
>> +	list_for_each_entry(req, &xs_reply_list, list)
>> +		wake_up(&req->wq);
>> +	list_for_each_entry(req, &xb_write_list, list)
>> +		wake_up(&req->wq);
> 
> We are waking up waiters here, but there is no guarantee that waiting
> threads will have a chance to run, is there?

You are right, but that isn't the point. We want to avoid blocking a
reboot because some thread needed for it is stuck waiting for Xenstore,
and that is what the wake-ups accomplish here.


Juergen


