From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Juergen Gross <jgross@suse.com>, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses
Date: Wed, 11 Jan 2017 10:29:01 -0500
Message-ID: <4a9f657b-1cf1-94ff-daf8-c928b644e044@oracle.com>
In-Reply-To: <2fddb09d-742a-e505-70ab-ec9815f93176@suse.com>

>>> +
>>> +
>>> +static bool test_reply(struct xb_req_data *req)
>>> +{
>>> +	if (req->state == xb_req_state_got_reply || !xenbus_ok())
>>> +		return true;
>>> +
>>> +	/* Make sure to reread req->state each time. */
>>> +	cpu_relax();
>> I don't think I understand why this is needed.
> I need a compiler barrier. Otherwise the compiler reads req->state only
> once outside the while loop.

Then barrier() looks like the right primitive to use here. cpu_relax(),
while doing what you want, is intended for other purposes.

>
>>> +
>>> +	return false;
>>> +}
>>> +
>>
>>> +static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
>>> {
>>> -	mutex_lock(&xs_state.transaction_mutex);
>>> -	atomic_inc(&xs_state.transaction_count);
>>> -	mutex_unlock(&xs_state.transaction_mutex);
>>> -}
>>> +	bool notify;
>>>
>>> -static void transaction_end(void)
>>> -{
>>> -	if (atomic_dec_and_test(&xs_state.transaction_count))
>>> -		wake_up(&xs_state.transaction_wq);
>>> -}
>>> +	req->msg = *msg;
>>> +	req->err = 0;
>>> +	req->state = xb_req_state_queued;
>>> +	init_waitqueue_head(&req->wq);
>>>
>>> -static void transaction_suspend(void)
>>> -{
>>> -	mutex_lock(&xs_state.transaction_mutex);
>>> -	wait_event(xs_state.transaction_wq,
>>> -		   atomic_read(&xs_state.transaction_count) == 0);
>>> -}
>>> +	xs_request_enter(req);
>>>
>>> -static void transaction_resume(void)
>>> -{
>>> -	mutex_unlock(&xs_state.transaction_mutex);
>>> +	req->msg.req_id = xs_request_id++;
>> Is it safe to do this without a lock?
> You are right: I should move this to xs_request_enter() inside the
> lock. I think I'll let xs_request_enter() return the request id.

Then please move xs_request_id's declaration close to xs_state_lock's
declaration (just like you are going to move the two other state
variables).

>
>>> +static int xs_reboot_notify(struct notifier_block *nb,
>>> +			    unsigned long code, void *unused)
>>> {
>>> -	struct xs_stored_msg *msg;
>>
>>
>>> +	struct xb_req_data *req;
>>> +
>>> +	mutex_lock(&xb_write_mutex);
>>> +	list_for_each_entry(req, &xs_reply_list, list)
>>> +		wake_up(&req->wq);
>>> +	list_for_each_entry(req, &xb_write_list, list)
>>> +		wake_up(&req->wq);
>> We are waking up waiters here but there is no guarantee that waiting
>> threads will have a chance to run, is there?
> You are right. But this isn't the point. We want to avoid blocking a
> reboot due to some needed thread waiting for xenstore. And this task
> is being accomplished here.

I think it's worth adding a comment mentioning this.

-boris