Subject: Re: [next] ton of "scheduling while atomic"
From: Bjorn Helgaas
Date: 2013-11-20 23:14 UTC
  To: Jiri Slaby
  Cc: Alex Duyck, Yinghai Lu, Tejun Heo, Linux kernel mailing list,
	Jiri Slaby, Stephen Rothwell, linux-next list

[+cc Stephen, linux-next]

On Wed, Nov 20, 2013 at 4:01 PM, Jiri Slaby <jslaby@suse.cz> wrote:
> Hi,
>
> I'm unable to boot my virtual machine since this commit:
> commit 961da7fb6b220d4ae7ec8cc8feb860f269a177e5
> Author: Alexander Duyck <alexander.h.duyck@intel.com>
> Date:   Mon Nov 18 10:59:59 2013 -0700
>
>     PCI: Avoid unnecessary CPU switch when calling driver .probe() method
>
> A revert of that patch helps.

Thanks, Jiri, and sorry for the inconvenience.

I dropped that commit from my for-linus branch and force-updated it.  I
hope it's in time to make tomorrow's linux-next (the new
pci-current/for-linus head is e7cc5cf74544 ("PCI: Remove duplicate
pci_disable_device() from pcie_portdrv_remove()")).

I'll look into this more tomorrow.

Bjorn

> This is because I get a ton of the following splats (wrapping .probe in
> preempt_disable() does not seem to be a good idea at all):
> BUG: scheduling while atomic: swapper/0/1/0x00000002
> 3 locks held by swapper/0/1:
>  #0:  (&__lockdep_no_validate__){......}, at: [<ffffffff814000b3>]
> __driver_attach+0x53/0xb0
>  #1:  (&__lockdep_no_validate__){......}, at: [<ffffffff814000c1>]
> __driver_attach+0x61/0xb0
>  #2:  (drm_global_mutex){+.+.+.}, at: [<ffffffff8135d981>]
> drm_dev_register+0x21/0x1f0
> Modules linked in:
> CPU: 1 PID: 1 Comm: swapper/0 Tainted: G        W
> 3.12.0-next-20131120+ #4
> Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
>  ffff88002f512780 ffff88002dcc77d8 ffffffff816a90d4 0000000000000006
>  ffff88002dcc8000 ffff88002dcc77f8 ffffffff816a5879 0000000000000006
>  ffff88002dcc79d0 ffff88002dcc7868 ffffffff816adf5c ffff88002dcc8000
> Call Trace:
>  [<ffffffff816a90d4>] dump_stack+0x4e/0x71
>  [<ffffffff816a5879>] __schedule_bug+0x5c/0x6c
>  [<ffffffff816adf5c>] __schedule+0x7bc/0x820
>  [<ffffffff816ae084>] schedule+0x24/0x70
>  [<ffffffff816ad23d>] schedule_timeout+0x1bd/0x260
>  [<ffffffff810cb33e>] ? mark_held_locks+0xae/0x140
>  [<ffffffff816b344b>] ? _raw_spin_unlock_irq+0x2b/0x50
>  [<ffffffff810cb4d5>] ? trace_hardirqs_on_caller+0x105/0x1d0
>  [<ffffffff816aedf7>] wait_for_completion+0xa7/0x110
>  [<ffffffff810b2da0>] ? try_to_wake_up+0x330/0x330
>  [<ffffffff81404aab>] devtmpfs_create_node+0x11b/0x150
>  [<ffffffff813fd0c6>] device_add+0x1f6/0x5b0
>  [<ffffffff8140bec6>] ? pm_runtime_init+0x106/0x110
>  [<ffffffff813fd499>] device_register+0x19/0x20
>  [<ffffffff813fd58b>] device_create_groups_vargs+0xeb/0x110
>  [<ffffffff813fd5f7>] device_create_vargs+0x17/0x20
>  [<ffffffff813fd62c>] device_create+0x2c/0x30
>  [<ffffffff8135d811>] ? drm_get_minor+0xc1/0x210
>  [<ffffffff81168bd4>] ? kmem_cache_alloc+0xf4/0x100
>  [<ffffffff813611a8>] drm_sysfs_device_add+0x58/0x90
>  [<ffffffff8135d8d8>] drm_get_minor+0x188/0x210
>  [<ffffffff8135dabc>] drm_dev_register+0x15c/0x1f0
>  [<ffffffff8135fb68>] drm_get_pci_dev+0x98/0x150
>  [<ffffffff813f8930>] cirrus_pci_probe+0xa0/0xd0
>  [<ffffffff812b34f4>] pci_device_probe+0xa4/0x120
>  [<ffffffff813ffe86>] driver_probe_device+0x76/0x250
>  [<ffffffff81400103>] __driver_attach+0xa3/0xb0
>  [<ffffffff81400060>] ? driver_probe_device+0x250/0x250
>  [<ffffffff813fe10d>] bus_for_each_dev+0x5d/0xa0
>  [<ffffffff813ff9a9>] driver_attach+0x19/0x20
>  [<ffffffff813ff5af>] bus_add_driver+0x10f/0x210
>  [<ffffffff81cc4355>] ? intel_no_opregion_vbt_callback+0x30/0x30
>  [<ffffffff814007af>] driver_register+0x5f/0x100
>  [<ffffffff81cc4355>] ? intel_no_opregion_vbt_callback+0x30/0x30
>  [<ffffffff812b239f>] __pci_register_driver+0x5f/0x70
>  [<ffffffff8135fd35>] drm_pci_init+0x115/0x130
>  [<ffffffff81cc4355>] ? intel_no_opregion_vbt_callback+0x30/0x30
>  [<ffffffff81cc4387>] cirrus_init+0x32/0x3b
>  [<ffffffff8100032a>] do_one_initcall+0xfa/0x140
>  [<ffffffff81c9efed>] kernel_init_freeable+0x1a5/0x23a
>  [<ffffffff81c9e812>] ? do_early_param+0x8c/0x8c
>  [<ffffffff816a0fc0>] ? rest_init+0xd0/0xd0
>  [<ffffffff816a0fc9>] kernel_init+0x9/0x120
>  [<ffffffff816b423c>] ret_from_fork+0x7c/0xb0
>  [<ffffffff816a0fc0>] ? rest_init+0xd0/0xd0
>
> thanks,
> --
> js
> suse labs
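
The pattern Jiri points at boils down to roughly the following.  This is
a minimal sketch for illustration only, not the literal diff from
961da7fb6b22: pci_call_probe_sketch() is a made-up name, and the real
code in drivers/pci/pci-driver.c is structured differently.

    /*
     * Sketch: pin .probe() to the current CPU with preempt_disable()
     * so the driver allocates its memory on the device's NUMA node.
     */
    static int pci_call_probe_sketch(struct pci_driver *drv,
                                     struct pci_dev *dev,
                                     const struct pci_device_id *id)
    {
            int error;

            preempt_disable();      /* atomic context: preempt_count != 0 */

            /*
             * .probe() runs in process context and may sleep.  In the
             * trace above it does exactly that: cirrus_pci_probe() ends
             * up in devtmpfs_create_node(), which calls
             * wait_for_completion() and hence schedule().  Calling
             * schedule() with preemption disabled trips
             * __schedule_bug() -- "BUG: scheduling while atomic".
             */
            error = drv->probe(dev, id);

            preempt_enable();
            return error;
    }

The code this patch changed gets the same NUMA locality from
work_on_cpu(), which runs local_pci_probe() in a kworker's ordinary
process context, where sleeping is allowed.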
