linux-riscv.lists.infradead.org archive mirror
* Kernel panic - not syncing: corrupted stack end detected inside scheduler
@ 2018-11-19 11:23 Andreas Schwab
  2018-11-19 23:46 ` Palmer Dabbelt
  0 siblings, 1 reply; 5 messages in thread
From: Andreas Schwab @ 2018-11-19 11:23 UTC (permalink / raw)
  To: linux-riscv

Could this be a stack overflow?

[ 2427.690000] Kernel panic - not syncing: corrupted stack end detected inside scheduler
[ 2427.690000]
[ 2427.690000] CPU: 1 PID: 3540 Comm: kworker/u8:2 Not tainted 4.19.0-00014-g978b77fe75 #6
[ 2427.690000] Workqueue: writeback wb_workfn (flush-179:0)
[ 2427.690000] Call Trace:
[ 2427.690000] [<ffffffe000c867d4>] walk_stackframe+0x0/0xa4
[ 2427.690000] [<ffffffe000c869d4>] show_stack+0x2a/0x34
[ 2427.690000] [<ffffffe0011a8800>] dump_stack+0x62/0x7c
[ 2427.690000] [<ffffffe000c8b542>] panic+0xd2/0x1f0
[ 2427.690000] [<ffffffe0011bb25c>] schedule+0x0/0x58
[ 2427.690000] [<ffffffe0011bb470>] preempt_schedule_common+0xe/0x1e
[ 2427.690000] [<ffffffe0011bb4b4>] _cond_resched+0x34/0x40
[ 2427.690000] [<ffffffe001025694>] __spi_pump_messages+0x29e/0x40e
[ 2427.690000] [<ffffffe001025986>] __spi_sync+0x168/0x16a
[ 2427.690000] [<ffffffe001025b86>] spi_sync_locked+0xc/0x14
[ 2427.690000] [<ffffffe001077e8e>] mmc_spi_data_do.isra.2+0x568/0xa7c
[ 2427.690000] [<ffffffe0010783fa>] mmc_spi_request+0x58/0xc6
[ 2427.690000] [<ffffffe001068bbe>] __mmc_start_request+0x4e/0xe2
[ 2427.690000] [<ffffffe001069902>] mmc_start_request+0x78/0xa4
[ 2427.690000] [<ffffffd008307394>] mmc_blk_mq_issue_rq+0x21e/0x64e [mmc_block]
[ 2427.690000] [<ffffffd008307b46>] mmc_mq_queue_rq+0x11a/0x1f0 [mmc_block]
[ 2427.690000] [<ffffffe000ebbf60>] __blk_mq_try_issue_directly+0xca/0x146
[ 2427.690000] [<ffffffe000ebca2c>] blk_mq_request_issue_directly+0x42/0x92
[ 2427.690000] [<ffffffe000ebcaac>] blk_mq_try_issue_list_directly+0x30/0x6e
[ 2427.690000] [<ffffffe000ebfdc2>] blk_mq_sched_insert_requests+0x56/0x80
[ 2427.690000] [<ffffffe000ebc9da>] blk_mq_flush_plug_list+0xd6/0xe6
[ 2427.690000] [<ffffffe000eb3498>] blk_flush_plug_list+0x9e/0x17c
[ 2427.690000] [<ffffffe000ebc2f8>] blk_mq_make_request+0x282/0x2d8
[ 2427.690000] [<ffffffe000eb1d02>] generic_make_request+0xee/0x27a
[ 2427.690000] [<ffffffe000eb1f6e>] submit_bio+0xe0/0x136
[ 2427.690000] [<ffffffe000db10da>] submit_bh_wbc+0x130/0x176
[ 2427.690000] [<ffffffe000db12c6>] __block_write_full_page+0x1a6/0x3a8
[ 2427.690000] [<ffffffe000db167c>] block_write_full_page+0xce/0xe0
[ 2427.690000] [<ffffffe000db40f0>] blkdev_writepage+0x16/0x1e
[ 2427.690000] [<ffffffe000d3c7ca>] __writepage+0x14/0x4c
[ 2427.690000] [<ffffffe000d3d142>] write_cache_pages+0x15c/0x306
[ 2427.690000] [<ffffffe000d3e8a4>] generic_writepages+0x36/0x52
[ 2427.690000] [<ffffffe000db40b4>] blkdev_writepages+0xc/0x14
[ 2427.690000] [<ffffffe000d3f0ec>] do_writepages+0x36/0xa6
[ 2427.690000] [<ffffffe000da96ca>] __writeback_single_inode+0x2e/0x174
[ 2427.690000] [<ffffffe000da9c08>] writeback_sb_inodes+0x1ac/0x33e
[ 2427.690000] [<ffffffe000da9dea>] __writeback_inodes_wb+0x50/0x96
[ 2427.690000] [<ffffffe000daa052>] wb_writeback+0x182/0x186
[ 2427.690000] [<ffffffe000daa67c>] wb_workfn+0x242/0x270
[ 2427.690000] [<ffffffe000c9bb08>] process_one_work+0x16e/0x2ee
[ 2427.690000] [<ffffffe000c9bcde>] worker_thread+0x56/0x42a
[ 2427.690000] [<ffffffe000ca0bdc>] kthread+0xda/0xe8
[ 2427.690000] [<ffffffe000c85730>] ret_from_exception+0x0/0xc
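
For reference, that panic string comes from the scheduler's stack-end
sanity check.  A minimal sketch of the 4.19-era logic, condensed from
kernel/sched/core.c and include/linux/sched/task_stack.h (it assumes
CONFIG_SCHED_STACK_END_CHECK=y, which must be set for this panic to
fire at all):

    /* The lowest word of every task stack is seeded with
     * STACK_END_MAGIC (0x57AC6E9D) at fork time; if anything has
     * scribbled over it, the next trip through the scheduler panics. */
    #define task_stack_end_corrupted(task) \
            (*(end_of_stack(task)) != STACK_END_MAGIC)

    static inline void schedule_debug(struct task_struct *prev)
    {
    #ifdef CONFIG_SCHED_STACK_END_CHECK
            if (task_stack_end_corrupted(prev))
                    panic("corrupted stack end detected inside scheduler\n");
    #endif
            /* ... */
    }

Note that the check only proves the guard word was overwritten -- by an
overflow past the end of the stack, or by any other stray write.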

Andreas.

-- 
Andreas Schwab, SUSE Labs, schwab@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."

* Re: Kernel panic - not syncing: corrupted stack end detected inside scheduler
  2018-11-19 11:23 Kernel panic - not syncing: corrupted stack end detected inside scheduler Andreas Schwab
@ 2018-11-19 23:46 ` Palmer Dabbelt
  2018-11-20  8:52   ` Andreas Schwab
  0 siblings, 1 reply; 5 messages in thread
From: Palmer Dabbelt @ 2018-11-19 23:46 UTC (permalink / raw)
  To: linux-riscv

On Mon, 19 Nov 2018 03:23:14 PST (-0800), schwab@suse.de wrote:
> Could this be a stack overflow?

Yes.

> [ 2427.690000] Kernel panic - not syncing: corrupted stack end detected inside scheduler
> [ 2427.690000]
> [ 2427.690000] CPU: 1 PID: 3540 Comm: kworker/u8:2 Not tainted 4.19.0-00014-g978b77fe75 #6
> [ 2427.690000] Workqueue: writeback wb_workfn (flush-179:0)
> [ 2427.690000] Call Trace:
> [ 2427.690000] [<ffffffe000c867d4>] walk_stackframe+0x0/0xa4
> [ 2427.690000] [<ffffffe000c869d4>] show_stack+0x2a/0x34
> [ 2427.690000] [<ffffffe0011a8800>] dump_stack+0x62/0x7c
> [ 2427.690000] [<ffffffe000c8b542>] panic+0xd2/0x1f0
> [ 2427.690000] [<ffffffe0011bb25c>] schedule+0x0/0x58
> [ 2427.690000] [<ffffffe0011bb470>] preempt_schedule_common+0xe/0x1e
> [ 2427.690000] [<ffffffe0011bb4b4>] _cond_resched+0x34/0x40
> [ 2427.690000] [<ffffffe001025694>] __spi_pump_messages+0x29e/0x40e
> [ 2427.690000] [<ffffffe001025986>] __spi_sync+0x168/0x16a
> [ 2427.690000] [<ffffffe001025b86>] spi_sync_locked+0xc/0x14
> [ 2427.690000] [<ffffffe001077e8e>] mmc_spi_data_do.isra.2+0x568/0xa7c
> [ 2427.690000] [<ffffffe0010783fa>] mmc_spi_request+0x58/0xc6
> [ 2427.690000] [<ffffffe001068bbe>] __mmc_start_request+0x4e/0xe2
> [ 2427.690000] [<ffffffe001069902>] mmc_start_request+0x78/0xa4
> [ 2427.690000] [<ffffffd008307394>] mmc_blk_mq_issue_rq+0x21e/0x64e [mmc_block]
> [ 2427.690000] [<ffffffd008307b46>] mmc_mq_queue_rq+0x11a/0x1f0 [mmc_block]
> [ 2427.690000] [<ffffffe000ebbf60>] __blk_mq_try_issue_directly+0xca/0x146
> [ 2427.690000] [<ffffffe000ebca2c>] blk_mq_request_issue_directly+0x42/0x92
> [ 2427.690000] [<ffffffe000ebcaac>] blk_mq_try_issue_list_directly+0x30/0x6e
> [ 2427.690000] [<ffffffe000ebfdc2>] blk_mq_sched_insert_requests+0x56/0x80
> [ 2427.690000] [<ffffffe000ebc9da>] blk_mq_flush_plug_list+0xd6/0xe6
> [ 2427.690000] [<ffffffe000eb3498>] blk_flush_plug_list+0x9e/0x17c
> [ 2427.690000] [<ffffffe000ebc2f8>] blk_mq_make_request+0x282/0x2d8
> [ 2427.690000] [<ffffffe000eb1d02>] generic_make_request+0xee/0x27a
> [ 2427.690000] [<ffffffe000eb1f6e>] submit_bio+0xe0/0x136
> [ 2427.690000] [<ffffffe000db10da>] submit_bh_wbc+0x130/0x176
> [ 2427.690000] [<ffffffe000db12c6>] __block_write_full_page+0x1a6/0x3a8
> [ 2427.690000] [<ffffffe000db167c>] block_write_full_page+0xce/0xe0
> [ 2427.690000] [<ffffffe000db40f0>] blkdev_writepage+0x16/0x1e
> [ 2427.690000] [<ffffffe000d3c7ca>] __writepage+0x14/0x4c
> [ 2427.690000] [<ffffffe000d3d142>] write_cache_pages+0x15c/0x306
> [ 2427.690000] [<ffffffe000d3e8a4>] generic_writepages+0x36/0x52
> [ 2427.690000] [<ffffffe000db40b4>] blkdev_writepages+0xc/0x14
> [ 2427.690000] [<ffffffe000d3f0ec>] do_writepages+0x36/0xa6
> [ 2427.690000] [<ffffffe000da96ca>] __writeback_single_inode+0x2e/0x174
> [ 2427.690000] [<ffffffe000da9c08>] writeback_sb_inodes+0x1ac/0x33e
> [ 2427.690000] [<ffffffe000da9dea>] __writeback_inodes_wb+0x50/0x96
> [ 2427.690000] [<ffffffe000daa052>] wb_writeback+0x182/0x186
> [ 2427.690000] [<ffffffe000daa67c>] wb_workfn+0x242/0x270
> [ 2427.690000] [<ffffffe000c9bb08>] process_one_work+0x16e/0x2ee
> [ 2427.690000] [<ffffffe000c9bcde>] worker_thread+0x56/0x42a
> [ 2427.690000] [<ffffffe000ca0bdc>] kthread+0xda/0xe8
> [ 2427.690000] [<ffffffe000c85730>] ret_from_exception+0x0/0xc

It smells like the issue is somewhere in the SPI driver, which is known to be
buggy.  I don't see anything in this stack trace that specifically indicates a
stack overflow (the frames above panic are just part of printing the
backtrace).
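
One way to get harder evidence either way would be to log the stack
low-water mark on that path.  A hypothetical instrumentation sketch
(not something in the tree; it assumes CONFIG_DEBUG_STACK_USAGE=y,
which is what provides stack_not_used()):

    #include <linux/sched/task_stack.h>

    /* Print how many bytes at the bottom of the current task's stack
     * have never been written.  Called from a deep spot such as
     * __spi_pump_messages(), it would show how close this path gets
     * to the end of the stack before the magic word is clobbered. */
    static inline void report_stack_headroom(const char *where)
    {
            pr_info("%s: %s/%d has %lu stack bytes untouched\n",
                    where, current->comm, current->pid,
                    stack_not_used(current));
    }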

Sorry I can't be more specific.  Does this require hardware to manifest?

* Re: Kernel panic - not syncing: corrupted stack end detected inside scheduler
  2018-11-19 23:46 ` Palmer Dabbelt
@ 2018-11-20  8:52   ` Andreas Schwab
  2018-11-20 17:29     ` Palmer Dabbelt
  0 siblings, 1 reply; 5 messages in thread
From: Andreas Schwab @ 2018-11-20  8:52 UTC (permalink / raw)
  To: linux-riscv

On Nov 19 2018, Palmer Dabbelt <palmer@sifive.com> wrote:

> Sorry I can't be more specific.  Does this require hardware to manifest?

This was on the HiFive.

Andreas.

-- 
Andreas Schwab, SUSE Labs, schwab@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."

* Re: Kernel panic - not syncing: corrupted stack end detected inside scheduler
  2018-11-20  8:52   ` Andreas Schwab
@ 2018-11-20 17:29     ` Palmer Dabbelt
  2018-11-21  8:55       ` Andreas Schwab
  0 siblings, 1 reply; 5 messages in thread
From: Palmer Dabbelt @ 2018-11-20 17:29 UTC (permalink / raw)
  To: linux-riscv

On Tue, 20 Nov 2018 00:52:42 PST (-0800), schwab@suse.de wrote:
> On Nov 19 2018, Palmer Dabbelt <palmer@sifive.com> wrote:
>
>> Sorry I can't be more specific.  Does this require hardware to manifest?
>
> This was on the hifive.

OK, well, we know there are at least some issues when using an SD card via the
SPI interface as a disk, but nobody has had time to track them down yet.
Right now the only reproducer is just to write a lot of data, which is a pain
to debug.  The known issue manifests as a hang in a kernel thread with an
MMC-like name.  What you're seeing may be the same or a different issue.

Do you have a better way to reproduce this than to just hammer the filesystem?
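
For the record, the "write a lot of data" reproducer is roughly a loop
like the one below (a hypothetical sketch -- the mount point and sizes
are assumptions, not from any report):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hammer the SPI-attached SD card with large sequential writes
     * until the MMC/SPI path misbehaves. */
    int main(void)
    {
            static char buf[1 << 20];               /* 1 MiB per write */
            memset(buf, 0xa5, sizeof(buf));

            for (unsigned long pass = 0; ; pass++) {
                    int fd = open("/mnt/sd/hammer.dat",
                                  O_WRONLY | O_CREAT | O_TRUNC, 0644);
                    if (fd < 0) {
                            perror("open");
                            return 1;
                    }
                    for (int i = 0; i < 512; i++)   /* 512 MiB per pass */
                            if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                                    perror("write");
                                    close(fd);
                                    return 1;
                            }
                    fsync(fd);
                    close(fd);
                    printf("pass %lu done\n", pass);
            }
    }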

* Re: Kernel panic - not syncing: corrupted stack end detected inside scheduler
  2018-11-20 17:29     ` Palmer Dabbelt
@ 2018-11-21  8:55       ` Andreas Schwab
  0 siblings, 0 replies; 5 messages in thread
From: Andreas Schwab @ 2018-11-21  8:55 UTC (permalink / raw)
  To: linux-riscv

On Nov 20 2018, Palmer Dabbelt <palmer@sifive.com> wrote:

> Do you have a better way to reproduce this than to just hammer the filesystem?

I don't hammer the filesystem.  Most of the data is on NFS and /tmp is
on tmpfs.

Andreas.

-- 
Andreas Schwab, SUSE Labs, schwab@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."

end of thread, other threads:[~2018-11-21  8:55 UTC | newest]

Thread overview: 5 messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-19 11:23 Kernel panic - not syncing: corrupted stack end detected inside scheduler Andreas Schwab
2018-11-19 23:46 ` Palmer Dabbelt
2018-11-20  8:52   ` Andreas Schwab
2018-11-20 17:29     ` Palmer Dabbelt
2018-11-21  8:55       ` Andreas Schwab
