Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
From: Juergen Gross <jgross@suse.com>
Message-ID: <94edd823-a171-5bbf-9c41-6eeb14e6e111@suse.com>
Date: Wed, 11 Jan 2017 17:50:49 +0100
In-Reply-To: <4a9f657b-1cf1-94ff-daf8-c928b644e044@oracle.com>

On 11/01/17 16:29, Boris Ostrovsky wrote:
>
>>>> +
>>>> +
>>>> +static bool test_reply(struct xb_req_data *req)
>>>> +{
>>>> +	if (req->state == xb_req_state_got_reply || !xenbus_ok())
>>>> +		return true;
>>>> +
>>>> +	/* Make sure to reread req->state each time. */
>>>> +	cpu_relax();
>>> I don't think I understand why this is needed.
>> I need a compiler barrier. Otherwise the compiler might read req->state
>> only once, outside the while loop.
>
> Then barrier() looks like the right primitive to use here. cpu_relax(),
> while doing what you want, is intended for other purposes.

Hmm, yes, this sounds better.
>>
>>>> +
>>>> +	return false;
>>>> +}
>>>> +
>>>
>>>> +static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
>>>> {
>>>> -	mutex_lock(&xs_state.transaction_mutex);
>>>> -	atomic_inc(&xs_state.transaction_count);
>>>> -	mutex_unlock(&xs_state.transaction_mutex);
>>>> -}
>>>> +	bool notify;
>>>>
>>>> -static void transaction_end(void)
>>>> -{
>>>> -	if (atomic_dec_and_test(&xs_state.transaction_count))
>>>> -		wake_up(&xs_state.transaction_wq);
>>>> -}
>>>> +	req->msg = *msg;
>>>> +	req->err = 0;
>>>> +	req->state = xb_req_state_queued;
>>>> +	init_waitqueue_head(&req->wq);
>>>>
>>>> -static void transaction_suspend(void)
>>>> -{
>>>> -	mutex_lock(&xs_state.transaction_mutex);
>>>> -	wait_event(xs_state.transaction_wq,
>>>> -		   atomic_read(&xs_state.transaction_count) == 0);
>>>> -}
>>>> +	xs_request_enter(req);
>>>>
>>>> -static void transaction_resume(void)
>>>> -{
>>>> -	mutex_unlock(&xs_state.transaction_mutex);
>>>> +	req->msg.req_id = xs_request_id++;
>>> Is it safe to do this without a lock?
>> You are right: I should move this to xs_request_enter() inside the
>> lock. I think I'll let xs_request_enter() return the request id.
>
> Then please move xs_request_id's declaration close to xs_state_lock's
> declaration (just like you are going to move the two other state
> variables).

Already done. :-)

>>
>>>> +static int xs_reboot_notify(struct notifier_block *nb,
>>>> +			    unsigned long code, void *unused)
>>>> {
>>>> -	struct xs_stored_msg *msg;
>>>
>>>
>>>> +	struct xb_req_data *req;
>>>> +
>>>> +	mutex_lock(&xb_write_mutex);
>>>> +	list_for_each_entry(req, &xs_reply_list, list)
>>>> +		wake_up(&req->wq);
>>>> +	list_for_each_entry(req, &xb_write_list, list)
>>>> +		wake_up(&req->wq);
>>> We are waking up waiters here, but there is no guarantee that waiting
>>> threads will have a chance to run, is there?
>> You are right. But this isn't the point. We want to avoid blocking a
>> reboot due to some needed thread waiting for xenstore. And this task
>> is being accomplished here.
>
> I think it's worth adding a comment mentioning this.

Okay.


Juergen