From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Juergen Gross <jgross@suse.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses
Date: Tue, 24 Jan 2017 08:47:15 -0500	[thread overview]
Message-ID: <252c5d0b-0195-0a56-e236-21e8da449071__48621.4674696841$1485265711$gmane$org@oracle.com> (raw)
In-Reply-To: <46b54be4-297a-af8f-aac2-f2a080752034@oracle.com>

On 01/23/2017 01:59 PM, Boris Ostrovsky wrote:
> On 01/23/2017 05:09 AM, Juergen Gross wrote:
>> Handling of multiple concurrent Xenstore accesses through the xenbus
>> driver, either from the kernel or from user land, is rather lame today:
>> xenbus can have only one access active at any point in time.
>>
>>


This patch appears to break save/restore:

[   39.979281] Freezing user space processes ... (elapsed 0.000 seconds) done.
[   39.981347] Freezing remaining freezable tasks ... (elapsed 0.000 seconds) done.
[   39.983853] PM: freeze of devices complete after 0.537 msecs
[   39.984955] suspending xenstore...
[  246.751144] INFO: task xenwatch:35 blocked for more than 120 seconds.
[  246.752286]       Not tainted 4.10.0-rc5upstream-00311-g3eda026 #2
[  246.753378] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  246.754489] xenwatch        D12056    35      2 0x00000000
[  246.755624] Call Trace:
[  246.756732]  __schedule+0x225/0x6f0
[  246.757834]  schedule+0x3c/0xb0
[  246.758918]  xs_suspend+0x84/0xc0
[  246.759983]  ? woken_wake_function+0x10/0x10
[  246.761039]  do_suspend+0x52/0x190
[  246.762079]  ? xenbus_transaction_end+0x2f/0x40
[  246.763097]  shutdown_handler+0xfc/0x130
[  246.764086]  xenwatch_thread+0xaa/0x150
[  246.765076]  ? woken_wake_function+0x10/0x10
[  246.766042]  ? schedule+0x3c/0xb0
[  246.766951]  ? _raw_spin_unlock_irqrestore+0x15/0x20
[  246.767869]  ? xenbus_printf+0xa0/0xa0
[  246.768603]  kthread+0x109/0x140
[  246.769057]  ? __kthread_init_worker+0x30/0x30
[  246.769514]  ret_from_fork+0x2c/0x40
[  246.769963] NMI backtrace for cpu 1
[  246.770422] CPU: 1 PID: 322 Comm: khungtaskd Not tainted 4.10.0-rc5upstream-00311-g3eda026 #2
[  246.770898] Call Trace:
[  246.771366]  dump_stack+0x67/0x98
[  246.771413]  ? x86_vector_alloc_irqs+0x111/0x1a0
[  246.771413]  nmi_cpu_backtrace+0xae/0xb0
[  246.771413]  ? hw_nmi_get_sample_period+0x20/0x20
[  246.771413]  nmi_trigger_cpumask_backtrace+0x126/0x160
[  246.771413]  arch_trigger_cpumask_backtrace+0x14/0x20
[  246.771413]  watchdog+0x3bf/0x470
[  246.771413]  ? reset_hung_task_detector+0x20/0x20
[  246.771413]  ? schedule+0x3c/0xb0
[  246.771413]  ? _raw_spin_unlock_irqrestore+0x15/0x20
[  246.771413]  ? reset_hung_task_detector+0x20/0x20
[  246.771413]  kthread+0x109/0x140
[  246.771413]  ? proc_cap_handler+0x1a0/0x1a0
[  246.771413]  ? __kthread_init_worker+0x30/0x30
[  246.771413]  ? proc_cap_handler+0x1a0/0x1a0
[  246.771413]  ? proc_cap_handler+0x1a0/0x1a0
[  246.771413]  ret_from_fork+0x2c/0x40
[  246.778690] Sending NMI from CPU 1 to CPUs 0:
[  246.779265] NMI backtrace for cpu 0
[  246.779609] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.10.0-rc5upstream-00311-g3eda026 #2
[  246.779961] task: ffffffff81e11500 task.stack: ffffffff81e00000
[  246.780257] RIP: e030:xen_hypercall_sched_op+0xa/0x20
[  246.780257] RSP: e02b:ffffffff81e03dd0 EFLAGS: 00000246
[  246.780257] RAX: 0000000000000000 RBX: ffffffff81e11500 RCX: ffffffff810013aa
[  246.780257] RDX: 0000000000000001 RSI: deadbeefdeadf00d RDI: deadbeefdeadf00d
[  246.780257] RBP: ffffffff81e03de8 R08: 0100000000000000 R09: ffff88003f212940
[  246.780257] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[  246.780257] R13: 0000000000000000 R14: ffffffff81e11502 R15: ffffffff81e11500
[  246.780257] FS:  00007f53626e9700(0000) GS:ffff88003f200000(0000) knlGS:0000000000000000
[  246.780257] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[  246.780257] CR2: ffffffffff600400 CR3: 0000000007616000 CR4: 0000000000042660
[  246.780257] Call Trace:
[  246.780257]  ? xen_safe_halt+0x10/0x20
[  246.780257]  default_idle+0x1d/0x100
[  246.780257]  arch_cpu_idle+0xa/0x10
[  246.780257]  default_idle_call+0x1e/0x30
[  246.780257]  do_idle+0x17c/0x250
[  246.780257]  cpu_startup_entry+0x1d/0x20
[  246.780257]  rest_init+0x80/0x90
[  246.780257]  start_kernel+0x483/0x490
[  246.780257]  ? set_init_arg+0x5e/0x5e
[  246.780257]  x86_64_start_reservations+0x2a/0x2c
[  246.780257]  xen_start_kernel+0x51b/0x51d
[  246.780257] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[  246.952336] Kernel panic - not syncing: hung_task: blocked tasks
[  246.952818] CPU: 1 PID: 322 Comm: khungtaskd Not tainted 4.10.0-rc5upstream-00311-g3eda026 #2
[  246.953318] Call Trace:
[  246.953318]  dump_stack+0x67/0x98
[  246.953318]  panic+0xcd/0x22c
[  246.953318]  ? find_next_bit+0xb/0x10
[  246.953318]  watchdog+0x3cd/0x470
[  246.953318]  ? reset_hung_task_detector+0x20/0x20
[  246.953318]  ? schedule+0x3c/0xb0
[  246.953318]  ? _raw_spin_unlock_irqrestore+0x15/0x20
[  246.953318]  ? reset_hung_task_detector+0x20/0x20
[  246.953318]  kthread+0x109/0x140
[  246.953318]  ? proc_cap_handler+0x1a0/0x1a0
[  246.953318]  ? __kthread_init_worker+0x30/0x30
[  246.953318]  ? proc_cap_handler+0x1a0/0x1a0
[  246.953318]  ? proc_cap_handler+0x1a0/0x1a0
[  246.953318]  ret_from_fork+0x2c/0x40
[  246.953318] Kernel Offset: disabled
-bash-4.1# xl destroy bootstrap-x86_64




-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

Thread overview: 21+ messages
2017-01-23 10:09 [PATCH v3 0/3] xen: optimize xenbus performance Juergen Gross
2017-01-23 10:09 ` [PATCH v3 1/3] xen: clean up xenbus internal headers Juergen Gross
2017-01-23 10:09 ` [PATCH v3 2/3] xen: modify xenstore watch event interface Juergen Gross
2017-01-23 10:09 ` [PATCH v3 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses Juergen Gross
2017-01-23 18:59   ` Boris Ostrovsky
2017-01-24 13:47     ` Boris Ostrovsky [this message]
2017-01-24 16:23       ` Juergen Gross
2017-01-24 17:17         ` Boris Ostrovsky
2017-02-07 17:51         ` Boris Ostrovsky
2017-02-07 22:39           ` Boris Ostrovsky
2017-02-08  6:21             ` Juergen Gross
